I0208 10:47:13.716806 8 e2e.go:224] Starting e2e run "5d59a491-4a60-11ea-95d6-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581158833 - Will randomize all specs
Will run 201 of 2164 specs

Feb 8 10:47:14.001: INFO: >>> kubeConfig: /root/.kube/config
Feb 8 10:47:14.007: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 8 10:47:14.057: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 8 10:47:14.246: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 8 10:47:14.246: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 8 10:47:14.246: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 8 10:47:14.276: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 8 10:47:14.276: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 8 10:47:14.276: INFO: e2e test version: v1.13.12
Feb 8 10:47:14.285: INFO: kube-apiserver version: v1.13.8
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:47:14.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Feb 8 10:47:15.287: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-5e9db17b-4a60-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 8 10:47:15.355: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-ncg8r" to be "success or failure"
Feb 8 10:47:15.380: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.524654ms
Feb 8 10:47:17.391: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03553288s
Feb 8 10:47:19.405: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04994939s
Feb 8 10:47:22.431: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075627531s
Feb 8 10:47:24.445: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.089986726s
Feb 8 10:47:26.475: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.11979742s
STEP: Saw pod success
Feb 8 10:47:26.475: INFO: Pod "pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:47:26.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Feb 8 10:47:27.556: INFO: Waiting for pod pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005 to disappear
Feb 8 10:47:27.570: INFO: Pod pod-configmaps-5ea1de13-4a60-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:47:27.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ncg8r" for this suite.
Feb 8 10:47:33.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:47:33.827: INFO: namespace: e2e-tests-configmap-ncg8r, resource: bindings, ignored listing per whitelist
Feb 8 10:47:33.839: INFO: namespace e2e-tests-configmap-ncg8r deletion completed in 6.260971652s

• [SLOW TEST:19.554 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:47:33.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 8 10:47:34.324: INFO: Waiting up to 5m0s for pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-twd8s" to be "success or failure"
Feb 8 10:47:34.363: INFO: Pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.006401ms
Feb 8 10:47:36.373: INFO: Pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048826597s
Feb 8 10:47:38.384: INFO: Pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060248285s
Feb 8 10:47:40.408: INFO: Pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083811145s
Feb 8 10:47:42.431: INFO: Pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106868184s
STEP: Saw pod success
Feb 8 10:47:42.431: INFO: Pod "pod-69e53be4-4a60-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:47:42.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-69e53be4-4a60-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 10:47:42.681: INFO: Waiting for pod pod-69e53be4-4a60-11ea-95d6-0242ac110005 to disappear
Feb 8 10:47:42.867: INFO: Pod pod-69e53be4-4a60-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:47:42.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-twd8s" for this suite.
Feb 8 10:47:48.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:47:49.060: INFO: namespace: e2e-tests-emptydir-twd8s, resource: bindings, ignored listing per whitelist
Feb 8 10:47:49.111: INFO: namespace e2e-tests-emptydir-twd8s deletion completed in 6.217710081s

• [SLOW TEST:15.272 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:47:49.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 8 10:47:49.324: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 8 10:47:49.415: INFO: Waiting for terminating namespaces to be deleted...
Feb 8 10:47:49.421: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 8 10:47:49.434: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 8 10:47:49.434: INFO: Container coredns ready: true, restart count 0
Feb 8 10:47:49.434: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 8 10:47:49.434: INFO: Container kube-proxy ready: true, restart count 0
Feb 8 10:47:49.434: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 8 10:47:49.434: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 8 10:47:49.434: INFO: Container weave ready: true, restart count 0
Feb 8 10:47:49.434: INFO: Container weave-npc ready: true, restart count 0
Feb 8 10:47:49.434: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 8 10:47:49.434: INFO: Container coredns ready: true, restart count 0
Feb 8 10:47:49.434: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 8 10:47:49.434: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 8 10:47:49.434: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 8 10:47:49.480: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-72fe4665-4a60-11ea-95d6-0242ac110005.15f1679d53f45d4e], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-998d9/filler-pod-72fe4665-4a60-11ea-95d6-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72fe4665-4a60-11ea-95d6-0242ac110005.15f1679e4b662f09], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72fe4665-4a60-11ea-95d6-0242ac110005.15f1679ef54588ca], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72fe4665-4a60-11ea-95d6-0242ac110005.15f1679f27fa31a8], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15f1679faad97e5b], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:48:00.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-998d9" for this suite.
Feb 8 10:48:06.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:48:07.084: INFO: namespace: e2e-tests-sched-pred-998d9, resource: bindings, ignored listing per whitelist
Feb 8 10:48:07.137: INFO: namespace e2e-tests-sched-pred-998d9 deletion completed in 6.352375314s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.025 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet
  when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:48:07.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:48:20.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lhbv7" for this suite.
Feb 8 10:49:02.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:49:02.634: INFO: namespace: e2e-tests-kubelet-test-lhbv7, resource: bindings, ignored listing per whitelist
Feb 8 10:49:02.780: INFO: namespace e2e-tests-kubelet-test-lhbv7 deletion completed in 42.355682784s

• [SLOW TEST:55.643 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:49:02.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 8 10:49:03.092: INFO: Waiting up to 5m0s for pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-dk4px" to be "success or failure"
Feb 8 10:49:03.107: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.121924ms
Feb 8 10:49:05.128: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03534542s
Feb 8 10:49:07.158: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065788494s
Feb 8 10:49:09.235: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143229188s
Feb 8 10:49:11.273: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181161562s
Feb 8 10:49:13.282: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189879669s
STEP: Saw pod success
Feb 8 10:49:13.282: INFO: Pod "pod-9edbc9a0-4a60-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:49:13.286: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9edbc9a0-4a60-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 10:49:13.898: INFO: Waiting for pod pod-9edbc9a0-4a60-11ea-95d6-0242ac110005 to disappear
Feb 8 10:49:13.979: INFO: Pod pod-9edbc9a0-4a60-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:49:13.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dk4px" for this suite.
Feb 8 10:49:20.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:49:20.694: INFO: namespace: e2e-tests-emptydir-dk4px, resource: bindings, ignored listing per whitelist
Feb 8 10:49:20.744: INFO: namespace e2e-tests-emptydir-dk4px deletion completed in 6.312893357s

• [SLOW TEST:17.964 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:49:20.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 8 10:49:21.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-p59kf" to be "success or failure"
Feb 8 10:49:21.090: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.271873ms
Feb 8 10:49:23.315: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23858652s
Feb 8 10:49:25.329: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252924955s
Feb 8 10:49:29.046: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.969945493s
Feb 8 10:49:31.145: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069475706s
Feb 8 10:49:33.193: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.117374548s
STEP: Saw pod success
Feb 8 10:49:33.193: INFO: Pod "downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:49:33.208: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005 container client-container:
STEP: delete the pod
Feb 8 10:49:33.888: INFO: Waiting for pod downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005 to disappear
Feb 8 10:49:33.892: INFO: Pod downwardapi-volume-a9942e43-4a60-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:49:33.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p59kf" for this suite.
Feb 8 10:49:39.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:49:40.043: INFO: namespace: e2e-tests-downward-api-p59kf, resource: bindings, ignored listing per whitelist
Feb 8 10:49:40.085: INFO: namespace e2e-tests-downward-api-p59kf deletion completed in 6.182606475s

• [SLOW TEST:19.340 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:49:40.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-xvx22
I0208 10:49:40.280594 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-xvx22, replica count: 1
I0208 10:49:41.331167 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:42.331485 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:43.331708 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:44.332105 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:45.332397 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:46.332591 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:47.332867 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:48.333162 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:49.333488 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:50.333731 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0208 10:49:51.334030 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 8 10:49:51.503: INFO: Created: latency-svc-rp8v5
Feb 8 10:49:51.597: INFO: Got endpoints: latency-svc-rp8v5 [163.465394ms]
Feb 8 10:49:51.695: INFO: Created: latency-svc-k4rrl
Feb 8 10:49:51.776: INFO: Got endpoints: latency-svc-k4rrl [178.396798ms]
Feb 8 10:49:51.828: INFO: Created: latency-svc-9cbcx
Feb 8 10:49:51.842: INFO: Got endpoints: latency-svc-9cbcx [244.107837ms]
Feb 8 10:49:51.982: INFO: Created: latency-svc-xzhg8
Feb 8 10:49:51.982: INFO: Got endpoints: latency-svc-xzhg8 [384.749461ms]
Feb 8 10:49:52.048: INFO: Created: latency-svc-w4gvr
Feb 8 10:49:52.196: INFO: Got endpoints: latency-svc-w4gvr [597.42038ms]
Feb 8 10:49:52.244: INFO: Created: latency-svc-mnqxs
Feb 8 10:49:52.270: INFO: Got endpoints: latency-svc-mnqxs [672.168903ms]
Feb 8 10:49:52.411: INFO: Created: latency-svc-4ng9v
Feb 8 10:49:52.424: INFO: Got endpoints: latency-svc-4ng9v [825.491183ms]
Feb 8 10:49:52.515: INFO: Created: latency-svc-hcv6v
Feb 8 10:49:52.665: INFO: Got endpoints: latency-svc-hcv6v [1.067090818s]
Feb 8 10:49:52.703: INFO: Created: latency-svc-nr64x
Feb 8 10:49:52.711: INFO: Got endpoints: latency-svc-nr64x [1.113168897s]
Feb 8 10:49:52.898: INFO: Created: latency-svc-b6k9z
Feb 8 10:49:52.920: INFO: Got endpoints: latency-svc-b6k9z [1.321396588s]
Feb 8 10:49:52.977: INFO: Created: latency-svc-gx5nd
Feb 8 10:49:53.083: INFO: Got endpoints: latency-svc-gx5nd [1.485422075s]
Feb 8 10:49:53.112: INFO: Created: latency-svc-xrrtl
Feb 8 10:49:53.145: INFO: Got endpoints: latency-svc-xrrtl [1.546748903s]
Feb 8 10:49:53.275: INFO: Created: latency-svc-bsv7j
Feb 8 10:49:53.314: INFO: Got endpoints: latency-svc-bsv7j [1.716152522s]
Feb 8 10:49:53.344: INFO: Created: latency-svc-hwhd7
Feb 8 10:49:53.481: INFO: Got endpoints: latency-svc-hwhd7 [1.88236592s]
Feb 8 10:49:53.506: INFO: Created: latency-svc-qnvfv
Feb 8 10:49:53.536: INFO: Got endpoints: latency-svc-qnvfv [1.937591794s]
Feb 8 10:49:53.698: INFO: Created: latency-svc-4vdkj
Feb 8 10:49:53.719: INFO: Got endpoints: latency-svc-4vdkj [2.120431923s]
Feb 8 10:49:53.779: INFO: Created: latency-svc-sxr28
Feb 8 10:49:53.970: INFO: Got endpoints: latency-svc-sxr28 [2.193383795s]
Feb 8 10:49:54.032: INFO: Created: latency-svc-lv9q4
Feb 8 10:49:54.053: INFO: Got endpoints: latency-svc-lv9q4 [2.211110264s]
Feb 8 10:49:54.339: INFO: Created: latency-svc-gmrk7
Feb 8 10:49:54.360: INFO: Got endpoints: latency-svc-gmrk7 [2.377350881s]
Feb 8 10:49:54.523: INFO: Created: latency-svc-fk62l
Feb 8 10:49:54.546: INFO: Got endpoints: latency-svc-fk62l [2.350445519s]
Feb 8 10:49:54.717: INFO: Created: latency-svc-45qks
Feb 8 10:49:54.735: INFO: Got endpoints: latency-svc-45qks [2.465139718s]
Feb 8 10:49:54.800: INFO: Created: latency-svc-ps5v4
Feb 8 10:49:54.901: INFO: Got endpoints: latency-svc-ps5v4 [2.477063212s]
Feb 8 10:49:54.923: INFO: Created: latency-svc-f8kmd
Feb 8 10:49:54.935: INFO: Got endpoints: latency-svc-f8kmd [2.269734341s]
Feb 8 10:49:54.988: INFO: Created: latency-svc-s4v8q
Feb 8 10:49:55.000: INFO: Got endpoints: latency-svc-s4v8q [2.288788724s]
Feb 8 10:49:55.140: INFO: Created: latency-svc-kstqt
Feb 8 10:49:55.183: INFO: Got endpoints: latency-svc-kstqt [2.263113499s]
Feb 8 10:49:55.346: INFO: Created: latency-svc-5kkqx
Feb 8 10:49:55.371: INFO: Got endpoints: latency-svc-5kkqx [2.287560542s]
Feb 8 10:49:55.457: INFO: Created: latency-svc-mtk2c
Feb 8 10:49:55.653: INFO: Got endpoints: latency-svc-mtk2c [2.50844548s]
Feb 8 10:49:55.690: INFO: Created: latency-svc-j28m2
Feb 8 10:49:55.724: INFO: Got endpoints: latency-svc-j28m2 [2.409633975s]
Feb 8 10:49:55.846: INFO: Created: latency-svc-rlz5h
Feb 8 10:49:55.880: INFO: Got endpoints: latency-svc-rlz5h [2.398679704s]
Feb 8 10:49:55.942: INFO: Created: latency-svc-k5szk
Feb 8 10:49:56.063: INFO: Got endpoints: latency-svc-k5szk [2.526586819s]
Feb 8 10:49:56.223: INFO: Created: latency-svc-hhprh
Feb 8 10:49:56.238: INFO: Got endpoints: latency-svc-hhprh [2.519377606s]
Feb 8 10:49:56.290: INFO: Created: latency-svc-7h8zn
Feb 8 10:49:56.301: INFO: Got endpoints: latency-svc-7h8zn [237.70176ms]
Feb 8 10:49:56.413: INFO: Created: latency-svc-mkh8k
Feb 8 10:49:56.429: INFO: Got endpoints: latency-svc-mkh8k [2.459132405s]
Feb 8 10:49:56.690: INFO: Created: latency-svc-4lfzh
Feb 8 10:49:56.724: INFO: Got endpoints: latency-svc-4lfzh [2.67051839s]
Feb 8 10:49:56.902: INFO: Created: latency-svc-kdz95
Feb 8 10:49:56.903: INFO: Got endpoints: latency-svc-kdz95 [2.542622422s]
Feb 8 10:49:56.947: INFO: Created: latency-svc-d2gk2
Feb 8 10:49:57.071: INFO: Got endpoints: latency-svc-d2gk2 [2.523975167s]
Feb 8 10:49:57.094: INFO: Created: latency-svc-ggv52
Feb 8 10:49:57.112: INFO: Got endpoints: latency-svc-ggv52 [2.376983369s]
Feb 8 10:49:57.291: INFO: Created: latency-svc-cvgmg
Feb 8 10:49:57.307: INFO: Got endpoints: latency-svc-cvgmg [2.406243213s]
Feb 8 10:49:57.460: INFO: Created: latency-svc-nr2lr
Feb 8 10:49:57.486: INFO: Got endpoints: latency-svc-nr2lr [2.551611322s]
Feb 8 10:49:57.550: INFO: Created: latency-svc-p5sk8
Feb 8 10:49:57.676: INFO: Got endpoints: latency-svc-p5sk8 [2.675732839s]
Feb 8 10:49:57.699: INFO: Created: latency-svc-pg9x2
Feb 8 10:49:57.715: INFO: Got endpoints: latency-svc-pg9x2 [2.532152463s]
Feb 8 10:49:57.758: INFO: Created: latency-svc-czlbv
Feb 8 10:49:57.895: INFO: Got endpoints: latency-svc-czlbv [2.523876774s]
Feb 8 10:49:57.956: INFO: Created: latency-svc-2bdfb
Feb 8 10:49:57.973: INFO: Got endpoints: latency-svc-2bdfb [2.319406596s]
Feb 8 10:49:58.117: INFO: Created: latency-svc-jqjl7
Feb 8 10:49:58.149: INFO: Got endpoints: latency-svc-jqjl7 [2.42445633s]
Feb 8 10:49:58.265: INFO: Created: latency-svc-qbkhm
Feb 8 10:49:58.291: INFO: Got endpoints: latency-svc-qbkhm [2.410747511s]
Feb 8 10:49:58.368: INFO: Created: latency-svc-gtg88
Feb 8 10:49:58.475: INFO: Got endpoints: latency-svc-gtg88 [2.23593777s]
Feb 8 10:49:58.678: INFO: Created: latency-svc-zswll
Feb 8 10:49:58.725: INFO: Got endpoints: latency-svc-zswll [2.423920632s]
Feb 8 10:49:58.739: INFO: Created: latency-svc-t4dkf
Feb 8 10:49:58.806: INFO: Got endpoints: latency-svc-t4dkf [2.376865413s]
Feb 8 10:49:58.834: INFO: Created: latency-svc-g5mnn
Feb 8 10:49:58.857: INFO: Got endpoints: latency-svc-g5mnn [2.132033099s]
Feb 8 10:49:59.013: INFO: Created: latency-svc-xw7n6
Feb 8 10:49:59.022: INFO: Got endpoints: latency-svc-xw7n6 [2.119363898s]
Feb 8 10:49:59.040: INFO: Created: latency-svc-mfm4d
Feb 8 10:49:59.058: INFO: Got endpoints: latency-svc-mfm4d [1.987175038s]
Feb 8 10:49:59.195: INFO: Created: latency-svc-lxf72
Feb 8 10:49:59.226: INFO: Got endpoints: latency-svc-lxf72 [2.113139804s]
Feb 8 10:49:59.402: INFO: Created: latency-svc-l7sxg
Feb 8 10:49:59.425: INFO: Got endpoints: latency-svc-l7sxg [2.117620794s]
Feb 8 10:49:59.436: INFO: Created: latency-svc-mkc66
Feb 8 10:49:59.527: INFO: Got endpoints: latency-svc-mkc66 [2.040353129s]
Feb 8 10:49:59.577: INFO: Created: latency-svc-k5z5v
Feb 8 10:49:59.732: INFO: Created: latency-svc-dhs88
Feb 8 10:49:59.759: INFO: Got endpoints: latency-svc-dhs88 [2.044299037s]
Feb 8 10:49:59.760: INFO: Got endpoints: latency-svc-k5z5v [2.083255244s]
Feb 8 10:49:59.800: INFO: Created: latency-svc-5nm7q
Feb 8 10:49:59.820: INFO: Got endpoints: latency-svc-5nm7q [1.924978993s]
Feb 8 10:49:59.915: INFO: Created: latency-svc-xbp8c
Feb 8 10:49:59.932: INFO: Got endpoints: latency-svc-xbp8c [1.95894218s]
Feb 8 10:49:59.981: INFO: Created: latency-svc-vscc9
Feb 8 10:50:00.055: INFO: Got endpoints: latency-svc-vscc9 [1.9058084s]
Feb 8 10:50:00.343: INFO: Created: latency-svc-wwxwg
Feb 8 10:50:00.378: INFO: Got endpoints: latency-svc-wwxwg [2.087648347s]
Feb 8 10:50:00.519: INFO: Created: latency-svc-tztw8
Feb 8 10:50:00.532: INFO: Got endpoints: latency-svc-tztw8 [2.057284256s]
Feb 8 10:50:00.787: INFO: Created: latency-svc-9rhwx
Feb 8 10:50:00.787: INFO: Got endpoints: latency-svc-9rhwx [2.062052106s]
Feb 8 10:50:01.027: INFO: Created: latency-svc-v45mr
Feb 8 10:50:01.064: INFO: Got endpoints: latency-svc-v45mr [2.257695876s]
Feb 8 10:50:01.253: INFO: Created: latency-svc-cvqh8
Feb 8 10:50:01.286: INFO: Got endpoints: latency-svc-cvqh8 [2.42916578s]
Feb 8 10:50:01.495: INFO: Created: latency-svc-tfp44
Feb 8 10:50:01.528: INFO: Got endpoints: latency-svc-tfp44 [2.505389177s]
Feb 8 10:50:01.705: INFO: Created: latency-svc-jkff6
Feb 8 10:50:01.802: INFO: Created: latency-svc-mlhgs
Feb 8 10:50:01.810: INFO: Got endpoints: latency-svc-jkff6 [2.751476582s]
Feb 8 10:50:01.896: INFO: Got endpoints: latency-svc-mlhgs [2.670715757s]
Feb 8 10:50:01.941: INFO: Created: latency-svc-qmqx6
Feb 8 10:50:01.954: INFO: Got endpoints: latency-svc-qmqx6 [2.529261926s]
Feb 8 10:50:02.094: INFO: Created: latency-svc-wv5lt
Feb 8 10:50:02.112: INFO: Got endpoints: latency-svc-wv5lt [2.584691011s]
Feb 8 10:50:02.349: INFO: Created: latency-svc-t4qpj
Feb 8 10:50:02.378: INFO: Got endpoints: latency-svc-t4qpj [2.618647872s]
Feb 8 10:50:02.543: INFO: Created: latency-svc-nptj2
Feb 8 10:50:02.612: INFO: Got endpoints: latency-svc-nptj2 [2.852398906s]
Feb 8 10:50:02.739: INFO: Created: latency-svc-62f6n
Feb 8 10:50:02.763: INFO: Got endpoints: latency-svc-62f6n [2.942645181s]
Feb 8 10:50:02.936: INFO: Created: latency-svc-knw5w
Feb 8 10:50:02.940: INFO: Got endpoints: latency-svc-knw5w [3.006876238s]
Feb 8 10:50:02.978: INFO: Created: latency-svc-7wfsc
Feb 8 10:50:03.077: INFO: Got endpoints: latency-svc-7wfsc [3.021615539s]
Feb 8 10:50:03.098: INFO: Created: latency-svc-6kc59
Feb 8 10:50:03.117: INFO: Got endpoints: latency-svc-6kc59 [2.738314003s]
Feb 8 10:50:03.255: INFO: Created: latency-svc-ktxtm
Feb 8 10:50:03.266: INFO: Got endpoints: latency-svc-ktxtm [2.734132629s]
Feb 8 10:50:03.319: INFO: Created: latency-svc-6q2vd
Feb 8 10:50:03.430: INFO: Got endpoints: latency-svc-6q2vd [2.643227585s]
Feb 8 10:50:03.485: INFO: Created: latency-svc-vxk7w
Feb 8 10:50:03.495: INFO: Got endpoints: latency-svc-vxk7w [2.430552857s]
Feb 8 10:50:03.623: INFO: Created: latency-svc-lpfzm
Feb 8 10:50:03.684: INFO: Got endpoints: latency-svc-lpfzm [2.397965429s]
Feb 8 10:50:03.797: INFO: Created: latency-svc-g2kc6
Feb 8 10:50:03.821: INFO: Got endpoints: latency-svc-g2kc6 [2.292553003s]
Feb 8 10:50:03.992: INFO: Created: latency-svc-8vqps
Feb 8 10:50:04.054: INFO: Got endpoints: latency-svc-8vqps [2.244675861s]
Feb 8 10:50:04.287: INFO: Created: latency-svc-kmd5s
Feb 8 10:50:04.333: INFO: Got endpoints: latency-svc-kmd5s
[2.436554571s] Feb 8 10:50:04.463: INFO: Created: latency-svc-mwhhg Feb 8 10:50:04.566: INFO: Got endpoints: latency-svc-mwhhg [2.611041793s] Feb 8 10:50:04.787: INFO: Created: latency-svc-xhmqk Feb 8 10:50:04.934: INFO: Got endpoints: latency-svc-xhmqk [2.821783388s] Feb 8 10:50:04.992: INFO: Created: latency-svc-l4tk7 Feb 8 10:50:05.107: INFO: Got endpoints: latency-svc-l4tk7 [2.728116259s] Feb 8 10:50:05.118: INFO: Created: latency-svc-f4jk7 Feb 8 10:50:05.144: INFO: Got endpoints: latency-svc-f4jk7 [2.531474802s] Feb 8 10:50:05.264: INFO: Created: latency-svc-8znqm Feb 8 10:50:05.291: INFO: Got endpoints: latency-svc-8znqm [2.527225013s] Feb 8 10:50:05.362: INFO: Created: latency-svc-mzsw2 Feb 8 10:50:05.471: INFO: Got endpoints: latency-svc-mzsw2 [2.531761721s] Feb 8 10:50:05.494: INFO: Created: latency-svc-6wbjb Feb 8 10:50:05.516: INFO: Got endpoints: latency-svc-6wbjb [2.439324151s] Feb 8 10:50:05.663: INFO: Created: latency-svc-zdllj Feb 8 10:50:05.693: INFO: Got endpoints: latency-svc-zdllj [2.575176524s] Feb 8 10:50:05.779: INFO: Created: latency-svc-f4xd2 Feb 8 10:50:05.876: INFO: Got endpoints: latency-svc-f4xd2 [2.609554578s] Feb 8 10:50:05.909: INFO: Created: latency-svc-592m6 Feb 8 10:50:05.910: INFO: Got endpoints: latency-svc-592m6 [2.479363218s] Feb 8 10:50:05.973: INFO: Created: latency-svc-sb7dq Feb 8 10:50:06.160: INFO: Got endpoints: latency-svc-sb7dq [2.665659711s] Feb 8 10:50:06.321: INFO: Created: latency-svc-gw9cv Feb 8 10:50:06.772: INFO: Got endpoints: latency-svc-gw9cv [3.087608736s] Feb 8 10:50:06.986: INFO: Created: latency-svc-6p8g6 Feb 8 10:50:07.015: INFO: Got endpoints: latency-svc-6p8g6 [3.193555266s] Feb 8 10:50:07.066: INFO: Created: latency-svc-l86qj Feb 8 10:50:07.198: INFO: Got endpoints: latency-svc-l86qj [3.142967766s] Feb 8 10:50:07.257: INFO: Created: latency-svc-bc887 Feb 8 10:50:07.264: INFO: Got endpoints: latency-svc-bc887 [2.930656868s] Feb 8 10:50:07.379: INFO: Created: latency-svc-zl9wx Feb 8 10:50:07.394: INFO: 
Got endpoints: latency-svc-zl9wx [2.828191741s] Feb 8 10:50:07.428: INFO: Created: latency-svc-cjmfd Feb 8 10:50:07.554: INFO: Got endpoints: latency-svc-cjmfd [2.620278869s] Feb 8 10:50:07.589: INFO: Created: latency-svc-968s4 Feb 8 10:50:07.595: INFO: Got endpoints: latency-svc-968s4 [2.488650031s] Feb 8 10:50:07.693: INFO: Created: latency-svc-tv7ht Feb 8 10:50:07.796: INFO: Got endpoints: latency-svc-tv7ht [2.652445486s] Feb 8 10:50:07.861: INFO: Created: latency-svc-54jl6 Feb 8 10:50:07.883: INFO: Got endpoints: latency-svc-54jl6 [2.592770685s] Feb 8 10:50:08.080: INFO: Created: latency-svc-brnjx Feb 8 10:50:08.104: INFO: Got endpoints: latency-svc-brnjx [2.632904869s] Feb 8 10:50:08.225: INFO: Created: latency-svc-zhf8h Feb 8 10:50:08.261: INFO: Got endpoints: latency-svc-zhf8h [2.744996668s] Feb 8 10:50:08.313: INFO: Created: latency-svc-9vbmw Feb 8 10:50:08.420: INFO: Got endpoints: latency-svc-9vbmw [2.727064273s] Feb 8 10:50:08.542: INFO: Created: latency-svc-jrxp6 Feb 8 10:50:08.663: INFO: Got endpoints: latency-svc-jrxp6 [2.786813171s] Feb 8 10:50:08.791: INFO: Created: latency-svc-q5k8t Feb 8 10:50:08.921: INFO: Got endpoints: latency-svc-q5k8t [3.011321975s] Feb 8 10:50:08.962: INFO: Created: latency-svc-5p7fq Feb 8 10:50:08.991: INFO: Got endpoints: latency-svc-5p7fq [2.830465693s] Feb 8 10:50:09.130: INFO: Created: latency-svc-hgtw7 Feb 8 10:50:09.178: INFO: Got endpoints: latency-svc-hgtw7 [2.40587538s] Feb 8 10:50:09.317: INFO: Created: latency-svc-8k2c6 Feb 8 10:50:09.331: INFO: Got endpoints: latency-svc-8k2c6 [2.315693259s] Feb 8 10:50:09.393: INFO: Created: latency-svc-b526h Feb 8 10:50:09.493: INFO: Got endpoints: latency-svc-b526h [2.295630757s] Feb 8 10:50:09.513: INFO: Created: latency-svc-jb2x7 Feb 8 10:50:09.539: INFO: Got endpoints: latency-svc-jb2x7 [2.27509867s] Feb 8 10:50:09.663: INFO: Created: latency-svc-gwtmg Feb 8 10:50:09.687: INFO: Got endpoints: latency-svc-gwtmg [2.29240882s] Feb 8 10:50:09.817: INFO: Created: 
latency-svc-xsg7m Feb 8 10:50:09.848: INFO: Got endpoints: latency-svc-xsg7m [2.293486829s] Feb 8 10:50:09.994: INFO: Created: latency-svc-jkk86 Feb 8 10:50:10.004: INFO: Got endpoints: latency-svc-jkk86 [2.408523429s] Feb 8 10:50:10.087: INFO: Created: latency-svc-xnmmv Feb 8 10:50:10.140: INFO: Created: latency-svc-nhxkj Feb 8 10:50:10.142: INFO: Got endpoints: latency-svc-xnmmv [2.345367826s] Feb 8 10:50:10.167: INFO: Got endpoints: latency-svc-nhxkj [2.283448177s] Feb 8 10:50:10.318: INFO: Created: latency-svc-pt6vj Feb 8 10:50:10.348: INFO: Got endpoints: latency-svc-pt6vj [2.242870409s] Feb 8 10:50:10.406: INFO: Created: latency-svc-c6cvt Feb 8 10:50:10.507: INFO: Got endpoints: latency-svc-c6cvt [2.245107977s] Feb 8 10:50:10.541: INFO: Created: latency-svc-kbmpr Feb 8 10:50:10.583: INFO: Got endpoints: latency-svc-kbmpr [2.162855759s] Feb 8 10:50:10.721: INFO: Created: latency-svc-gpvqj Feb 8 10:50:10.777: INFO: Got endpoints: latency-svc-gpvqj [2.113430865s] Feb 8 10:50:10.777: INFO: Created: latency-svc-j6wjt Feb 8 10:50:10.965: INFO: Got endpoints: latency-svc-j6wjt [2.043601988s] Feb 8 10:50:11.006: INFO: Created: latency-svc-7n72j Feb 8 10:50:11.024: INFO: Got endpoints: latency-svc-7n72j [2.0326419s] Feb 8 10:50:11.134: INFO: Created: latency-svc-st9c4 Feb 8 10:50:11.166: INFO: Got endpoints: latency-svc-st9c4 [1.988265218s] Feb 8 10:50:11.236: INFO: Created: latency-svc-95sls Feb 8 10:50:11.237: INFO: Got endpoints: latency-svc-95sls [1.90533546s] Feb 8 10:50:11.346: INFO: Created: latency-svc-c4n5h Feb 8 10:50:11.397: INFO: Created: latency-svc-rkxcp Feb 8 10:50:11.410: INFO: Got endpoints: latency-svc-c4n5h [1.915947679s] Feb 8 10:50:11.505: INFO: Got endpoints: latency-svc-rkxcp [1.965843782s] Feb 8 10:50:11.521: INFO: Created: latency-svc-9wbtp Feb 8 10:50:11.537: INFO: Got endpoints: latency-svc-9wbtp [1.850150037s] Feb 8 10:50:11.584: INFO: Created: latency-svc-bjszx Feb 8 10:50:11.695: INFO: Got endpoints: latency-svc-bjszx [1.847296326s] Feb 8 
10:50:11.751: INFO: Created: latency-svc-zglxv Feb 8 10:50:11.985: INFO: Created: latency-svc-xpt5q Feb 8 10:50:12.006: INFO: Got endpoints: latency-svc-zglxv [2.002374781s] Feb 8 10:50:12.174: INFO: Created: latency-svc-4l8c8 Feb 8 10:50:12.221: INFO: Got endpoints: latency-svc-xpt5q [2.079635242s] Feb 8 10:50:12.378: INFO: Got endpoints: latency-svc-4l8c8 [2.210718925s] Feb 8 10:50:12.403: INFO: Created: latency-svc-gd45t Feb 8 10:50:12.403: INFO: Got endpoints: latency-svc-gd45t [2.055220142s] Feb 8 10:50:12.660: INFO: Created: latency-svc-6k782 Feb 8 10:50:12.687: INFO: Got endpoints: latency-svc-6k782 [2.18025559s] Feb 8 10:50:12.738: INFO: Created: latency-svc-jl5hx Feb 8 10:50:12.861: INFO: Got endpoints: latency-svc-jl5hx [2.277413161s] Feb 8 10:50:13.023: INFO: Created: latency-svc-2lxc8 Feb 8 10:50:13.040: INFO: Got endpoints: latency-svc-2lxc8 [2.263066829s] Feb 8 10:50:13.108: INFO: Created: latency-svc-f7sbw Feb 8 10:50:13.332: INFO: Got endpoints: latency-svc-f7sbw [2.367113527s] Feb 8 10:50:13.337: INFO: Created: latency-svc-nb555 Feb 8 10:50:13.349: INFO: Got endpoints: latency-svc-nb555 [2.325002222s] Feb 8 10:50:13.561: INFO: Created: latency-svc-tbkx6 Feb 8 10:50:13.594: INFO: Got endpoints: latency-svc-tbkx6 [2.427632039s] Feb 8 10:50:13.793: INFO: Created: latency-svc-v88kg Feb 8 10:50:13.940: INFO: Got endpoints: latency-svc-v88kg [2.703420645s] Feb 8 10:50:13.960: INFO: Created: latency-svc-2cs4z Feb 8 10:50:13.973: INFO: Got endpoints: latency-svc-2cs4z [2.563644174s] Feb 8 10:50:14.149: INFO: Created: latency-svc-dsxz2 Feb 8 10:50:14.179: INFO: Got endpoints: latency-svc-dsxz2 [2.673289548s] Feb 8 10:50:14.371: INFO: Created: latency-svc-cckg2 Feb 8 10:50:14.384: INFO: Got endpoints: latency-svc-cckg2 [2.847397071s] Feb 8 10:50:14.463: INFO: Created: latency-svc-m7gb4 Feb 8 10:50:14.542: INFO: Got endpoints: latency-svc-m7gb4 [2.846376825s] Feb 8 10:50:14.642: INFO: Created: latency-svc-c8hxq Feb 8 10:50:14.814: INFO: Got endpoints: 
latency-svc-c8hxq [2.80752038s] Feb 8 10:50:15.318: INFO: Created: latency-svc-lx464 Feb 8 10:50:15.325: INFO: Got endpoints: latency-svc-lx464 [3.103609064s] Feb 8 10:50:15.731: INFO: Created: latency-svc-k7g4z Feb 8 10:50:15.771: INFO: Got endpoints: latency-svc-k7g4z [3.392362801s] Feb 8 10:50:15.995: INFO: Created: latency-svc-rmwt7 Feb 8 10:50:16.072: INFO: Got endpoints: latency-svc-rmwt7 [3.669403995s] Feb 8 10:50:16.356: INFO: Created: latency-svc-9dbbq Feb 8 10:50:16.451: INFO: Got endpoints: latency-svc-9dbbq [3.764093669s] Feb 8 10:50:16.738: INFO: Created: latency-svc-rxq7d Feb 8 10:50:16.776: INFO: Got endpoints: latency-svc-rxq7d [3.915393642s] Feb 8 10:50:16.925: INFO: Created: latency-svc-vbb4v Feb 8 10:50:17.014: INFO: Got endpoints: latency-svc-vbb4v [3.974021731s] Feb 8 10:50:17.171: INFO: Created: latency-svc-44mc9 Feb 8 10:50:17.251: INFO: Got endpoints: latency-svc-44mc9 [3.91924283s] Feb 8 10:50:17.764: INFO: Created: latency-svc-gtr2b Feb 8 10:50:17.764: INFO: Got endpoints: latency-svc-gtr2b [4.415134927s] Feb 8 10:50:18.026: INFO: Created: latency-svc-fstjt Feb 8 10:50:18.054: INFO: Got endpoints: latency-svc-fstjt [4.460428831s] Feb 8 10:50:18.312: INFO: Created: latency-svc-czkdl Feb 8 10:50:18.450: INFO: Got endpoints: latency-svc-czkdl [4.509594577s] Feb 8 10:50:18.456: INFO: Created: latency-svc-7stqv Feb 8 10:50:18.474: INFO: Got endpoints: latency-svc-7stqv [4.500364746s] Feb 8 10:50:18.631: INFO: Created: latency-svc-lpxn8 Feb 8 10:50:18.664: INFO: Got endpoints: latency-svc-lpxn8 [4.485161529s] Feb 8 10:50:18.783: INFO: Created: latency-svc-zb9rj Feb 8 10:50:18.801: INFO: Got endpoints: latency-svc-zb9rj [4.41647933s] Feb 8 10:50:19.750: INFO: Created: latency-svc-kmkbl Feb 8 10:50:19.780: INFO: Got endpoints: latency-svc-kmkbl [5.238065199s] Feb 8 10:50:19.844: INFO: Created: latency-svc-mtq9m Feb 8 10:50:19.997: INFO: Got endpoints: latency-svc-mtq9m [5.182606658s] Feb 8 10:50:20.025: INFO: Created: latency-svc-dhc6c Feb 8 
10:50:20.051: INFO: Got endpoints: latency-svc-dhc6c [4.725698269s] Feb 8 10:50:20.229: INFO: Created: latency-svc-97xg2 Feb 8 10:50:20.251: INFO: Got endpoints: latency-svc-97xg2 [4.480637451s] Feb 8 10:50:20.449: INFO: Created: latency-svc-hbmlb Feb 8 10:50:20.476: INFO: Got endpoints: latency-svc-hbmlb [4.403744979s] Feb 8 10:50:20.651: INFO: Created: latency-svc-948pc Feb 8 10:50:20.858: INFO: Got endpoints: latency-svc-948pc [4.406913345s] Feb 8 10:50:21.021: INFO: Created: latency-svc-g9pls Feb 8 10:50:21.033: INFO: Got endpoints: latency-svc-g9pls [4.256218189s] Feb 8 10:50:21.222: INFO: Created: latency-svc-xzvdw Feb 8 10:50:21.260: INFO: Got endpoints: latency-svc-xzvdw [4.246147843s] Feb 8 10:50:21.416: INFO: Created: latency-svc-vjlm6 Feb 8 10:50:21.416: INFO: Got endpoints: latency-svc-vjlm6 [4.164596021s] Feb 8 10:50:21.479: INFO: Created: latency-svc-6nwnj Feb 8 10:50:21.668: INFO: Got endpoints: latency-svc-6nwnj [3.903518296s] Feb 8 10:50:21.701: INFO: Created: latency-svc-cbzq2 Feb 8 10:50:21.754: INFO: Got endpoints: latency-svc-cbzq2 [3.699720375s] Feb 8 10:50:21.928: INFO: Created: latency-svc-mdllv Feb 8 10:50:21.931: INFO: Got endpoints: latency-svc-mdllv [3.481199361s] Feb 8 10:50:22.079: INFO: Created: latency-svc-vtvhf Feb 8 10:50:22.088: INFO: Got endpoints: latency-svc-vtvhf [3.613679893s] Feb 8 10:50:22.250: INFO: Created: latency-svc-l9bb9 Feb 8 10:50:22.384: INFO: Got endpoints: latency-svc-l9bb9 [3.720125177s] Feb 8 10:50:22.408: INFO: Created: latency-svc-wn86q Feb 8 10:50:22.415: INFO: Got endpoints: latency-svc-wn86q [3.614152452s] Feb 8 10:50:22.644: INFO: Created: latency-svc-mmzfq Feb 8 10:50:22.657: INFO: Got endpoints: latency-svc-mmzfq [2.876701125s] Feb 8 10:50:22.717: INFO: Created: latency-svc-wrwlh Feb 8 10:50:22.783: INFO: Got endpoints: latency-svc-wrwlh [2.786154791s] Feb 8 10:50:22.836: INFO: Created: latency-svc-p9j9w Feb 8 10:50:22.843: INFO: Got endpoints: latency-svc-p9j9w [2.792065136s] Feb 8 10:50:23.035: INFO: 
Created: latency-svc-nlkff Feb 8 10:50:23.085: INFO: Got endpoints: latency-svc-nlkff [2.833656399s] Feb 8 10:50:23.217: INFO: Created: latency-svc-pzz75 Feb 8 10:50:23.247: INFO: Got endpoints: latency-svc-pzz75 [2.770501492s] Feb 8 10:50:23.381: INFO: Created: latency-svc-6vhth Feb 8 10:50:23.394: INFO: Got endpoints: latency-svc-6vhth [2.535751591s] Feb 8 10:50:23.452: INFO: Created: latency-svc-2mwxb Feb 8 10:50:23.551: INFO: Got endpoints: latency-svc-2mwxb [2.517932915s] Feb 8 10:50:23.596: INFO: Created: latency-svc-8ltcs Feb 8 10:50:23.618: INFO: Got endpoints: latency-svc-8ltcs [2.357167589s] Feb 8 10:50:23.776: INFO: Created: latency-svc-k5l2w Feb 8 10:50:23.817: INFO: Got endpoints: latency-svc-k5l2w [2.40052344s] Feb 8 10:50:23.993: INFO: Created: latency-svc-vfhh8 Feb 8 10:50:24.022: INFO: Got endpoints: latency-svc-vfhh8 [2.353576426s] Feb 8 10:50:24.224: INFO: Created: latency-svc-cqmj2 Feb 8 10:50:24.224: INFO: Got endpoints: latency-svc-cqmj2 [2.469395341s] Feb 8 10:50:24.347: INFO: Created: latency-svc-dqrtb Feb 8 10:50:24.396: INFO: Got endpoints: latency-svc-dqrtb [2.464583812s] Feb 8 10:50:24.396: INFO: Created: latency-svc-qzz6d Feb 8 10:50:24.510: INFO: Got endpoints: latency-svc-qzz6d [2.422228015s] Feb 8 10:50:24.563: INFO: Created: latency-svc-gx7vb Feb 8 10:50:24.571: INFO: Got endpoints: latency-svc-gx7vb [2.185916567s] Feb 8 10:50:24.680: INFO: Created: latency-svc-rhwsr Feb 8 10:50:24.742: INFO: Created: latency-svc-plzm2 Feb 8 10:50:24.747: INFO: Got endpoints: latency-svc-rhwsr [2.331622514s] Feb 8 10:50:24.911: INFO: Got endpoints: latency-svc-plzm2 [2.25357187s] Feb 8 10:50:24.930: INFO: Created: latency-svc-xssm9 Feb 8 10:50:24.983: INFO: Got endpoints: latency-svc-xssm9 [2.199710444s] Feb 8 10:50:25.079: INFO: Created: latency-svc-j2sqk Feb 8 10:50:25.104: INFO: Got endpoints: latency-svc-j2sqk [2.26045288s] Feb 8 10:50:25.285: INFO: Created: latency-svc-v86gk Feb 8 10:50:25.307: INFO: Got endpoints: latency-svc-v86gk 
[2.221586524s] Feb 8 10:50:25.378: INFO: Created: latency-svc-blqk2 Feb 8 10:50:25.465: INFO: Got endpoints: latency-svc-blqk2 [2.218346089s] Feb 8 10:50:25.496: INFO: Created: latency-svc-gmx25 Feb 8 10:50:25.517: INFO: Got endpoints: latency-svc-gmx25 [2.122391831s] Feb 8 10:50:25.673: INFO: Created: latency-svc-fdrcm Feb 8 10:50:25.681: INFO: Got endpoints: latency-svc-fdrcm [2.129834726s] Feb 8 10:50:25.813: INFO: Created: latency-svc-ntm8p Feb 8 10:50:25.843: INFO: Got endpoints: latency-svc-ntm8p [2.225582052s] Feb 8 10:50:25.911: INFO: Created: latency-svc-zf67m Feb 8 10:50:25.991: INFO: Got endpoints: latency-svc-zf67m [2.174251773s] Feb 8 10:50:26.000: INFO: Created: latency-svc-nrj8w Feb 8 10:50:26.043: INFO: Got endpoints: latency-svc-nrj8w [2.021465596s] Feb 8 10:50:26.165: INFO: Created: latency-svc-vbj62 Feb 8 10:50:26.184: INFO: Got endpoints: latency-svc-vbj62 [1.960630646s] Feb 8 10:50:26.297: INFO: Created: latency-svc-kz7rb Feb 8 10:50:26.373: INFO: Got endpoints: latency-svc-kz7rb [1.976962763s] Feb 8 10:50:27.078: INFO: Created: latency-svc-trvnr Feb 8 10:50:27.349: INFO: Got endpoints: latency-svc-trvnr [2.838544865s] Feb 8 10:50:27.349: INFO: Latencies: [178.396798ms 237.70176ms 244.107837ms 384.749461ms 597.42038ms 672.168903ms 825.491183ms 1.067090818s 1.113168897s 1.321396588s 1.485422075s 1.546748903s 1.716152522s 1.847296326s 1.850150037s 1.88236592s 1.90533546s 1.9058084s 1.915947679s 1.924978993s 1.937591794s 1.95894218s 1.960630646s 1.965843782s 1.976962763s 1.987175038s 1.988265218s 2.002374781s 2.021465596s 2.0326419s 2.040353129s 2.043601988s 2.044299037s 2.055220142s 2.057284256s 2.062052106s 2.079635242s 2.083255244s 2.087648347s 2.113139804s 2.113430865s 2.117620794s 2.119363898s 2.120431923s 2.122391831s 2.129834726s 2.132033099s 2.162855759s 2.174251773s 2.18025559s 2.185916567s 2.193383795s 2.199710444s 2.210718925s 2.211110264s 2.218346089s 2.221586524s 2.225582052s 2.23593777s 2.242870409s 2.244675861s 2.245107977s 
2.25357187s 2.257695876s 2.26045288s 2.263066829s 2.263113499s 2.269734341s 2.27509867s 2.277413161s 2.283448177s 2.287560542s 2.288788724s 2.29240882s 2.292553003s 2.293486829s 2.295630757s 2.315693259s 2.319406596s 2.325002222s 2.331622514s 2.345367826s 2.350445519s 2.353576426s 2.357167589s 2.367113527s 2.376865413s 2.376983369s 2.377350881s 2.397965429s 2.398679704s 2.40052344s 2.40587538s 2.406243213s 2.408523429s 2.409633975s 2.410747511s 2.422228015s 2.423920632s 2.42445633s 2.427632039s 2.42916578s 2.430552857s 2.436554571s 2.439324151s 2.459132405s 2.464583812s 2.465139718s 2.469395341s 2.477063212s 2.479363218s 2.488650031s 2.505389177s 2.50844548s 2.517932915s 2.519377606s 2.523876774s 2.523975167s 2.526586819s 2.527225013s 2.529261926s 2.531474802s 2.531761721s 2.532152463s 2.535751591s 2.542622422s 2.551611322s 2.563644174s 2.575176524s 2.584691011s 2.592770685s 2.609554578s 2.611041793s 2.618647872s 2.620278869s 2.632904869s 2.643227585s 2.652445486s 2.665659711s 2.67051839s 2.670715757s 2.673289548s 2.675732839s 2.703420645s 2.727064273s 2.728116259s 2.734132629s 2.738314003s 2.744996668s 2.751476582s 2.770501492s 2.786154791s 2.786813171s 2.792065136s 2.80752038s 2.821783388s 2.828191741s 2.830465693s 2.833656399s 2.838544865s 2.846376825s 2.847397071s 2.852398906s 2.876701125s 2.930656868s 2.942645181s 3.006876238s 3.011321975s 3.021615539s 3.087608736s 3.103609064s 3.142967766s 3.193555266s 3.392362801s 3.481199361s 3.613679893s 3.614152452s 3.669403995s 3.699720375s 3.720125177s 3.764093669s 3.903518296s 3.915393642s 3.91924283s 3.974021731s 4.164596021s 4.246147843s 4.256218189s 4.403744979s 4.406913345s 4.415134927s 4.41647933s 4.460428831s 4.480637451s 4.485161529s 4.500364746s 4.509594577s 4.725698269s 5.182606658s 5.238065199s] Feb 8 10:50:27.349: INFO: 50 %ile: 2.427632039s Feb 8 10:50:27.349: INFO: 90 %ile: 3.764093669s Feb 8 10:50:27.349: INFO: 99 %ile: 5.182606658s Feb 8 10:50:27.349: INFO: Total sample count: 200 [AfterEach] 
[sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 10:50:27.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-xvx22" for this suite. Feb 8 10:51:21.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 10:51:21.663: INFO: namespace: e2e-tests-svc-latency-xvx22, resource: bindings, ignored listing per whitelist Feb 8 10:51:21.678: INFO: namespace e2e-tests-svc-latency-xvx22 deletion completed in 54.2532337s • [SLOW TEST:101.593 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 10:51:21.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f19e1837-4a60-11ea-95d6-0242ac110005 STEP: Creating a pod to test consume secrets Feb 8 10:51:21.961: INFO: Waiting up to 5m0s for pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005" in 
namespace "e2e-tests-secrets-sflpq" to be "success or failure" Feb 8 10:51:21.986: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.192708ms Feb 8 10:51:24.204: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24294038s Feb 8 10:51:26.365: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403811802s Feb 8 10:51:28.483: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521590919s Feb 8 10:51:30.812: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.850572196s Feb 8 10:51:32.826: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.864457196s STEP: Saw pod success Feb 8 10:51:32.826: INFO: Pod "pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005" satisfied condition "success or failure" Feb 8 10:51:32.831: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 8 10:51:33.011: INFO: Waiting for pod pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005 to disappear Feb 8 10:51:33.022: INFO: Pod pod-secrets-f1a1782e-4a60-11ea-95d6-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 10:51:33.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-sflpq" for this suite. 
Feb 8 10:51:39.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 10:51:39.202: INFO: namespace: e2e-tests-secrets-sflpq, resource: bindings, ignored listing per whitelist Feb 8 10:51:39.219: INFO: namespace e2e-tests-secrets-sflpq deletion completed in 6.185506159s • [SLOW TEST:17.541 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 10:51:39.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 10:51:51.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-kubelet-test-n97cc" for this suite. Feb 8 10:51:57.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 10:51:57.729: INFO: namespace: e2e-tests-kubelet-test-n97cc, resource: bindings, ignored listing per whitelist Feb 8 10:51:57.746: INFO: namespace e2e-tests-kubelet-test-n97cc deletion completed in 6.168955327s • [SLOW TEST:18.526 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 10:51:57.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 8 10:51:57.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Feb 8 10:51:58.032: INFO: stderr: "" Feb 
8 10:51:58.032: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Feb 8 10:51:58.037: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 10:51:58.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xc4bl" for this suite. Feb 8 10:52:04.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 10:52:04.149: INFO: namespace: e2e-tests-kubectl-xc4bl, resource: bindings, ignored listing per whitelist Feb 8 10:52:04.239: INFO: namespace e2e-tests-kubectl-xc4bl deletion completed in 6.191334921s S [SKIPPING] [6.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 8 10:51:58.037: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 10:52:04.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 10:52:04.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-76nh5" for this suite. 
Feb 8 10:52:10.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:52:10.974: INFO: namespace: e2e-tests-kubelet-test-76nh5, resource: bindings, ignored listing per whitelist
Feb 8 10:52:10.980: INFO: namespace e2e-tests-kubelet-test-76nh5 deletion completed in 6.286461235s
• [SLOW TEST:6.741 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:52:10.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-8wtb
STEP: Creating a pod to test atomic-volume-subpath
Feb 8 10:52:11.201: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8wtb" in namespace "e2e-tests-subpath-sdhkd" to be "success or failure"
Feb 8 10:52:11.217: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.401961ms
Feb 8 10:52:13.246: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045065134s
Feb 8 10:52:15.276: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075228251s
Feb 8 10:52:17.468: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267060379s
Feb 8 10:52:19.494: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293541963s
Feb 8 10:52:21.504: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.303120923s
Feb 8 10:52:23.515: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.314075025s
Feb 8 10:52:25.534: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.332689753s
Feb 8 10:52:27.549: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 16.348017085s
Feb 8 10:52:29.565: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 18.364084158s
Feb 8 10:52:31.575: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 20.37393002s
Feb 8 10:52:33.588: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 22.387070567s
Feb 8 10:52:35.613: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 24.411974334s
Feb 8 10:52:37.627: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 26.426057437s
Feb 8 10:52:39.644: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 28.442738484s
Feb 8 10:52:41.682: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Running", Reason="", readiness=false. Elapsed: 30.481084024s
Feb 8 10:52:43.693: INFO: Pod "pod-subpath-test-configmap-8wtb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.491847639s
STEP: Saw pod success
Feb 8 10:52:43.693: INFO: Pod "pod-subpath-test-configmap-8wtb" satisfied condition "success or failure"
Feb 8 10:52:43.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-8wtb container test-container-subpath-configmap-8wtb:
STEP: delete the pod
Feb 8 10:52:43.919: INFO: Waiting for pod pod-subpath-test-configmap-8wtb to disappear
Feb 8 10:52:44.000: INFO: Pod pod-subpath-test-configmap-8wtb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8wtb
Feb 8 10:52:44.000: INFO: Deleting pod "pod-subpath-test-configmap-8wtb" in namespace "e2e-tests-subpath-sdhkd"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:52:44.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sdhkd" for this suite.
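The atomic-writer subpath test above amounts to mounting a single ConfigMap key into a container through a `subPath` volume mount. A minimal hand-written equivalent looks roughly like this (a sketch only; the names and data key are illustrative, not the exact manifest the e2e framework generates):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config      # illustrative name
data:
  configmap-key: "mount-tested value"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/test-volume/configmap-key"]
    volumeMounts:
    - name: config-volume
      mountPath: /test-volume/configmap-key
      subPath: configmap-key     # mount one key of the volume, not the whole directory
  volumes:
  - name: config-volume
    configMap:
      name: subpath-demo-config
```

Note the long Pending-then-Running phase sequence in the log: subPath mounts take a full container start, so the pod runs for tens of seconds before reaching Succeeded.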
Feb 8 10:52:50.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:52:50.335: INFO: namespace: e2e-tests-subpath-sdhkd, resource: bindings, ignored listing per whitelist
Feb 8 10:52:50.488: INFO: namespace e2e-tests-subpath-sdhkd deletion completed in 6.443004573s
• [SLOW TEST:39.507 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:52:50.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 8 10:52:50.797: INFO: Waiting up to 5m0s for pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005" in namespace "e2e-tests-containers-czm58" to be "success or failure"
Feb 8 10:52:50.886: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 88.95908ms
Feb 8 10:52:52.902: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105028346s
Feb 8 10:52:54.924: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126558239s
Feb 8 10:52:56.947: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149965792s
Feb 8 10:52:59.666: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.868950692s
Feb 8 10:53:01.689: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.89205818s
STEP: Saw pod success
Feb 8 10:53:01.690: INFO: Pod "client-containers-26948ae7-4a61-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:53:01.697: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-26948ae7-4a61-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 10:53:02.150: INFO: Waiting for pod client-containers-26948ae7-4a61-11ea-95d6-0242ac110005 to disappear
Feb 8 10:53:02.171: INFO: Pod client-containers-26948ae7-4a61-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:53:02.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-czm58" for this suite.
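The "override the image's default command and arguments" case exercises the `command` and `args` fields of the container spec, which replace the image's ENTRYPOINT and CMD respectively. A hedged sketch of the pattern (image and strings are illustrative, not the e2e framework's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]           # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image's CMD
```

The test's "override all" step sets both fields at once; setting only `args` would keep the image's ENTRYPOINT and replace just its CMD.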
Feb 8 10:53:08.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:53:08.424: INFO: namespace: e2e-tests-containers-czm58, resource: bindings, ignored listing per whitelist
Feb 8 10:53:08.463: INFO: namespace e2e-tests-containers-czm58 deletion completed in 6.280008096s
• [SLOW TEST:17.975 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:53:08.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 8 10:53:08.741: INFO: Waiting up to 5m0s for pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-rvljw" to be "success or failure"
Feb 8 10:53:08.752: INFO: Pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61875ms
Feb 8 10:53:10.801: INFO: Pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059842121s
Feb 8 10:53:13.376: INFO: Pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.635300011s
Feb 8 10:53:15.391: INFO: Pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6498437s
Feb 8 10:53:17.403: INFO: Pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.662045268s
STEP: Saw pod success
Feb 8 10:53:17.403: INFO: Pod "pod-3145e9c4-4a61-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:53:17.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3145e9c4-4a61-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 10:53:17.547: INFO: Waiting for pod pod-3145e9c4-4a61-11ea-95d6-0242ac110005 to disappear
Feb 8 10:53:18.427: INFO: Pod pod-3145e9c4-4a61-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:53:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rvljw" for this suite.
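The "volume on default medium" case creates an `emptyDir` volume with no `medium` set (node-local disk rather than tmpfs) and checks the mount's permission mode from inside the container. A rough hand-written equivalent (names are illustrative; the real e2e test uses a dedicated mounttest image rather than `ls`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["ls", "-ld", "/test-volume"]  # prints the mount point's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # empty spec = default medium (node's backing storage)
```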
Feb 8 10:53:24.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:53:24.693: INFO: namespace: e2e-tests-emptydir-rvljw, resource: bindings, ignored listing per whitelist
Feb 8 10:53:24.783: INFO: namespace e2e-tests-emptydir-rvljw deletion completed in 6.331276876s
• [SLOW TEST:16.320 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:53:24.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-8rmkl
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-8rmkl
STEP: Deleting pre-stop pod
Feb 8 10:53:48.211: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:53:48.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-8rmkl" for this suite.
Feb 8 10:54:28.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:54:28.388: INFO: namespace: e2e-tests-prestop-8rmkl, resource: bindings, ignored listing per whitelist
Feb 8 10:54:28.623: INFO: namespace e2e-tests-prestop-8rmkl deletion completed in 40.341942397s
• [SLOW TEST:63.839 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:54:28.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb 8 10:54:29.421: INFO: Waiting up to 5m0s for pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw" in namespace "e2e-tests-svcaccounts-4w4m5" to be "success or failure"
Feb 8 10:54:29.450: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.143489ms
Feb 8 10:54:31.469: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047821824s
Feb 8 10:54:33.504: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081894927s
Feb 8 10:54:35.571: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149115787s
Feb 8 10:54:37.593: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17118611s
Feb 8 10:54:39.638: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.216672575s
Feb 8 10:54:41.693: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.270986919s
Feb 8 10:54:43.749: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.327781655s
Feb 8 10:54:46.189: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.767551765s
STEP: Saw pod success
Feb 8 10:54:46.189: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw" satisfied condition "success or failure"
Feb 8 10:54:46.206: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw container token-test:
STEP: delete the pod
Feb 8 10:54:46.703: INFO: Waiting for pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw to disappear
Feb 8 10:54:46.721: INFO: Pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-j6nzw no longer exists
STEP: Creating a pod to test consume service account root CA
Feb 8 10:54:46.903: INFO: Waiting up to 5m0s for pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5" in namespace "e2e-tests-svcaccounts-4w4m5" to be "success or failure"
Feb 8 10:54:46.945: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.1226ms
Feb 8 10:54:48.965: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061385866s
Feb 8 10:54:50.988: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084964062s
Feb 8 10:54:53.135: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232039979s
Feb 8 10:54:55.148: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244493803s
Feb 8 10:54:57.170: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266989174s
Feb 8 10:54:59.182: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.278707398s
Feb 8 10:55:01.205: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.301769354s
Feb 8 10:55:03.216: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.313064232s
STEP: Saw pod success
Feb 8 10:55:03.217: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5" satisfied condition "success or failure"
Feb 8 10:55:03.220: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5 container root-ca-test:
STEP: delete the pod
Feb 8 10:55:03.368: INFO: Waiting for pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5 to disappear
Feb 8 10:55:03.375: INFO: Pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-pxck5 no longer exists
STEP: Creating a pod to test consume service account namespace
Feb 8 10:55:03.397: INFO: Waiting up to 5m0s for pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn" in namespace "e2e-tests-svcaccounts-4w4m5" to be "success or failure"
Feb 8 10:55:03.436: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 39.882946ms
Feb 8 10:55:05.472: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075354468s
Feb 8 10:55:07.566: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168935695s
Feb 8 10:55:11.548: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151237036s
Feb 8 10:55:13.583: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186653508s
Feb 8 10:55:15.593: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.196876667s
Feb 8 10:55:18.196: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.79931032s
Feb 8 10:55:20.207: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Running", Reason="", readiness=false. Elapsed: 16.810077162s
Feb 8 10:55:23.391: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.9939899s
STEP: Saw pod success
Feb 8 10:55:23.391: INFO: Pod "pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn" satisfied condition "success or failure"
Feb 8 10:55:23.431: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn container namespace-test:
STEP: delete the pod
Feb 8 10:55:23.857: INFO: Waiting for pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn to disappear
Feb 8 10:55:23.885: INFO: Pod pod-service-account-615ca599-4a61-11ea-95d6-0242ac110005-cwtzn no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:55:23.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-4w4m5" for this suite.
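The three pods in the ServiceAccounts test (`token-test`, `root-ca-test`, `namespace-test`) each read one of the files that Kubernetes projects into every pod from its service account at `/var/run/secrets/kubernetes.io/serviceaccount/`. A minimal sketch of the first pod's shape (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-token-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    # the same directory also holds ca.crt and namespace,
    # which the second and third test pods read instead
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```

No explicit volume is declared: the token mount is injected automatically for the namespace's default service account, which is exactly what the test verifies.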
Feb 8 10:55:31.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:55:32.073: INFO: namespace: e2e-tests-svcaccounts-4w4m5, resource: bindings, ignored listing per whitelist
Feb 8 10:55:32.075: INFO: namespace e2e-tests-svcaccounts-4w4m5 deletion completed in 8.178434372s
• [SLOW TEST:63.452 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:55:32.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-86d32c36-4a61-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 8 10:55:32.264: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-fxggv" to be "success or failure"
Feb 8 10:55:32.270: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559498ms
Feb 8 10:55:34.285: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02066899s
Feb 8 10:55:36.306: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042255681s
Feb 8 10:55:38.332: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067911312s
Feb 8 10:55:40.360: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095744806s
Feb 8 10:55:42.382: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118119686s
STEP: Saw pod success
Feb 8 10:55:42.382: INFO: Pod "pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:55:42.394: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Feb 8 10:55:42.682: INFO: Waiting for pod pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005 to disappear
Feb 8 10:55:42.689: INFO: Pod pod-projected-secrets-86d3d421-4a61-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:55:42.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fxggv" for this suite.
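The "mappings and Item Mode set" projected-secret case maps a Secret key to a new path inside the volume and assigns that file an explicit permission mode via `items`. A hedged sketch of the manifest shape (all names, keys, and paths are illustrative, not the framework's generated values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /projected
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo-secret  # illustrative Secret name
          items:
          - key: data-1            # the "mapping": key renamed on disk
            path: new-path-data-1
            mode: 0400             # the "Item Mode set" part of the test
```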
Feb 8 10:55:48.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:55:48.999: INFO: namespace: e2e-tests-projected-fxggv, resource: bindings, ignored listing per whitelist
Feb 8 10:55:48.999: INFO: namespace e2e-tests-projected-fxggv deletion completed in 6.303960872s
• [SLOW TEST:16.925 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:55:49.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 8 10:55:49.244: INFO: Waiting up to 5m0s for pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-lkvmv" to be "success or failure"
Feb 8 10:55:49.443: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 198.163179ms
Feb 8 10:55:51.455: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210056447s
Feb 8 10:55:53.484: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239345002s
Feb 8 10:55:57.473: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228875568s
Feb 8 10:55:59.517: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.27240901s
Feb 8 10:56:01.555: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.309998175s
STEP: Saw pod success
Feb 8 10:56:01.555: INFO: Pod "pod-90f1a1ce-4a61-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:56:01.574: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-90f1a1ce-4a61-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 10:56:01.761: INFO: Waiting for pod pod-90f1a1ce-4a61-11ea-95d6-0242ac110005 to disappear
Feb 8 10:56:01.825: INFO: Pod pod-90f1a1ce-4a61-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:56:01.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lkvmv" for this suite.
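The "(non-root,0666,tmpfs)" variant combines three knobs: a non-root security context, a 0666 file mode, and an `emptyDir` backed by memory. This is only a rough hand-written approximation (the real e2e test uses its mounttest image to create the file with the requested mode; the user ID, paths, and shell commands here are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" part of the test
  containers:
  - name: test-container
    image: busybox
    command:
    - sh
    - -c
    - "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed rather than node disk
```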
Feb 8 10:56:07.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:56:07.949: INFO: namespace: e2e-tests-emptydir-lkvmv, resource: bindings, ignored listing per whitelist
Feb 8 10:56:08.032: INFO: namespace e2e-tests-emptydir-lkvmv deletion completed in 6.194371136s
• [SLOW TEST:19.032 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:56:08.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 8 10:56:08.315: INFO: Creating deployment "nginx-deployment"
Feb 8 10:56:08.333: INFO: Waiting for observed generation 1
Feb 8 10:56:10.874: INFO: Waiting for all required pods to come up
Feb 8 10:56:11.949: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 8 10:56:46.017: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 8 10:56:46.027: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 8 10:56:46.047: INFO: Updating deployment nginx-deployment
Feb 8 10:56:46.047: INFO: Waiting for observed generation 2
Feb 8 10:56:48.918: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 8 10:56:48.930: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 8 10:56:48.938: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 8 10:56:49.723: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 8 10:56:49.723: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 8 10:56:49.816: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 8 10:56:50.012: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 8 10:56:50.012: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 8 10:56:50.058: INFO: Updating deployment nginx-deployment
Feb 8 10:56:50.058: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 8 10:56:50.894: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 8 10:56:54.168: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 8 10:56:54.847: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-wft57,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wft57/deployments/nginx-deployment,UID:9c531b83-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965012,Generation:3,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-08 10:56:50 +0000 UTC 2020-02-08 10:56:50 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-08 10:56:54 +0000 UTC 2020-02-08 10:56:08 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 8 10:56:54.872: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-wft57,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wft57/replicasets/nginx-deployment-5c98f8fb5,UID:b2d0a669-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965008,Generation:3,CreationTimestamp:2020-02-08 10:56:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 9c531b83-4a61-11ea-a994-fa163e34d433 0xc002369797 0xc002369798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 8 10:56:54.872: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 8 10:56:54.873: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-wft57,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wft57/replicasets/nginx-deployment-85ddf47c5d,UID:9c574ef3-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965000,Generation:3,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 9c531b83-4a61-11ea-a994-fa163e34d433 0xc0023698d7 0xc0023698d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 8 10:56:56.310: INFO: Pod "nginx-deployment-5c98f8fb5-67pdw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-67pdw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-67pdw,UID:b2fecdec-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964941,Generation:0,CreationTimestamp:2020-02-08 10:56:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199e767 0xc00199e768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199e7d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00199e7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.310: INFO: Pod "nginx-deployment-5c98f8fb5-6wf76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6wf76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-6wf76,UID:b645d98f-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964992,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199e8d7 0xc00199e8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199e940} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199e960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.311: INFO: Pod "nginx-deployment-5c98f8fb5-bdsjd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bdsjd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-bdsjd,UID:b60d738b-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964971,Generation:0,CreationTimestamp:2020-02-08 
10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199e9d7 0xc00199e9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199ea40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199ea60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.311: INFO: Pod "nginx-deployment-5c98f8fb5-cd7mx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cd7mx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-cd7mx,UID:b5b2201f-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964964,Generation:0,CreationTimestamp:2020-02-08 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199ead7 0xc00199ead8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199eb40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199eb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.311: INFO: Pod "nginx-deployment-5c98f8fb5-cjs6t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cjs6t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-cjs6t,UID:b60d7f37-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964970,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199ebe7 0xc00199ebe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199ec50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199ec70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.311: INFO: Pod "nginx-deployment-5c98f8fb5-cldsv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cldsv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-cldsv,UID:b2d81023-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964937,Generation:0,CreationTimestamp:2020-02-08 10:56:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199ece7 0xc00199ece8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199ed50} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00199ed70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.311: INFO: Pod "nginx-deployment-5c98f8fb5-f4hhl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f4hhl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-f4hhl,UID:b6e873f0-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965002,Generation:0,CreationTimestamp:2020-02-08 10:56:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199ee37 0xc00199ee38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199eea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199eec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.312: INFO: Pod "nginx-deployment-5c98f8fb5-fvxtq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fvxtq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-fvxtq,UID:b645a551-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964985,Generation:0,CreationTimestamp:2020-02-08 
10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199ef37 0xc00199ef38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199efa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199efc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.312: INFO: Pod "nginx-deployment-5c98f8fb5-h4shd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h4shd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-h4shd,UID:b30d71f4-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964942,Generation:0,CreationTimestamp:2020-02-08 10:56:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199f037 0xc00199f038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.312: INFO: Pod "nginx-deployment-5c98f8fb5-lg9hp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lg9hp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-lg9hp,UID:b2d48622-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964922,Generation:0,CreationTimestamp:2020-02-08 10:56:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199f187 0xc00199f188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f1f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00199f210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.312: INFO: Pod "nginx-deployment-5c98f8fb5-nwhdj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nwhdj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-nwhdj,UID:b644e3eb-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964988,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199f2d7 0xc00199f2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f340} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.312: INFO: Pod "nginx-deployment-5c98f8fb5-qwrjg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qwrjg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-qwrjg,UID:b6461351-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964991,Generation:0,CreationTimestamp:2020-02-08 
10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199f3d7 0xc00199f3d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f440} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.313: INFO: Pod "nginx-deployment-5c98f8fb5-wwxdv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wwxdv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-5c98f8fb5-wwxdv,UID:b2d89439-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964938,Generation:0,CreationTimestamp:2020-02-08 10:56:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 b2d0a669-4a61-11ea-a994-fa163e34d433 0xc00199f4d7 0xc00199f4d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f540} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.313: INFO: Pod "nginx-deployment-85ddf47c5d-58k9l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-58k9l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-58k9l,UID:b60ca5a9-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964976,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199f627 0xc00199f628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00199f690} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.313: INFO: Pod "nginx-deployment-85ddf47c5d-7jn8l" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7jn8l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-7jn8l,UID:9c9501ac-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964856,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199f727 0xc00199f728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f790} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-08 10:56:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://30d5db11601f3ab1baa23fb6b8828a1bef4d2a58f214e7580da746faed857ae8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.313: INFO: Pod "nginx-deployment-85ddf47c5d-7q9p6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7q9p6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-7q9p6,UID:b64bbf68-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964994,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199f877 0xc00199f878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00199f8e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199f900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.313: INFO: Pod "nginx-deployment-85ddf47c5d-9dq2m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dq2m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-9dq2m,UID:b60d4eff-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964977,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199f977 0xc00199f978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199f9e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199fa00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.314: INFO: Pod "nginx-deployment-85ddf47c5d-b5b6n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5b6n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-b5b6n,UID:b60d2261-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964967,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199fa77 0xc00199fa78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199fae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199fb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.314: INFO: Pod "nginx-deployment-85ddf47c5d-b6wqg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b6wqg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-b6wqg,UID:b64b909b-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964989,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199fb77 0xc00199fb78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00199fbe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199fc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.314: INFO: Pod "nginx-deployment-85ddf47c5d-bqssc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqssc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-bqssc,UID:b64ef9d3-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964993,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199fc77 0xc00199fc78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199fce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199fd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.314: INFO: Pod "nginx-deployment-85ddf47c5d-c5jcm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c5jcm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-c5jcm,UID:b64bcd14-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964997,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199fd77 0xc00199fd78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00199fde0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199fe00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.314: INFO: Pod "nginx-deployment-85ddf47c5d-csdl4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-csdl4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-csdl4,UID:b5a95f7d-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964999,Generation:0,CreationTimestamp:2020-02-08 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc00199fe77 0xc00199fe78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00199fee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00199ff00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.315: INFO: Pod "nginx-deployment-85ddf47c5d-dl262" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dl262,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-dl262,UID:9c7f39e9-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964878,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac047 0xc001bac048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bac0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-08 10:56:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://628c99440ad6cd017ac7534b199ff23978a63a69b354ecd8a42c9e48607a4baa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.315: INFO: Pod "nginx-deployment-85ddf47c5d-dqdx6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dqdx6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-dqdx6,UID:9c9a55e2-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964828,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac197 0xc001bac198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bac200} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-08 10:56:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://214219772da3bc496bcb91fb965db0094268497fb8716d02efd99f32d1269d81}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.315: INFO: Pod "nginx-deployment-85ddf47c5d-lwrps" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lwrps,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-lwrps,UID:9c6df633-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964874,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac2e7 0xc001bac2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001bac350} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-08 10:56:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b8f38e8b2868284226666dbd95b7079edd67782d0034ae5b0e0f36403c35fcb1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.315: INFO: Pod "nginx-deployment-85ddf47c5d-lxghz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lxghz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-lxghz,UID:b64bb794-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964987,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac437 0xc001bac438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bac4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.315: INFO: Pod "nginx-deployment-85ddf47c5d-rr4ht" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rr4ht,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-rr4ht,UID:b5b6d3c5-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965017,Generation:0,CreationTimestamp:2020-02-08 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac537 0xc001bac538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001bac5b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 10:56:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.315: INFO: Pod "nginx-deployment-85ddf47c5d-slf25" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-slf25,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-slf25,UID:b5b72ee2-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964961,Generation:0,CreationTimestamp:2020-02-08 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac6e7 0xc001bac6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bac780} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.316: INFO: Pod "nginx-deployment-85ddf47c5d-thsjg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-thsjg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-thsjg,UID:9c9a43da-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964859,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac817 0xc001bac818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001bac880} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bac8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-08 10:56:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9a9de540339131f938c010b7d4319cac925678fa360a2d352c9eb88826335232}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.316: INFO: Pod "nginx-deployment-85ddf47c5d-v7psh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v7psh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-v7psh,UID:9c99fce4-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964862,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bac9d7 0xc001bac9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001baca40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001baca60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-08 10:56:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://779cd11e9cd026fc57d97b49d19c29647cef18a455a6f6a0fceeb94a057dbe99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.316: INFO: Pod "nginx-deployment-85ddf47c5d-w672f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w672f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-w672f,UID:b60cfb91-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964973,Generation:0,CreationTimestamp:2020-02-08 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bacbb7 0xc001bacbb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bacc20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bacc40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.316: INFO: Pod "nginx-deployment-85ddf47c5d-wtngn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wtngn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-wtngn,UID:9c9518ad-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964865,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001baccb7 0xc001baccb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bacd90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bacdb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-08 10:56:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f9596cfeabbf45e2d9488fc51da87bbc9f754d7a76eda413c670840f8515f033}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 8 10:56:56.316: INFO: Pod "nginx-deployment-85ddf47c5d-zxn24" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zxn24,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-wft57,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wft57/pods/nginx-deployment-85ddf47c5d-zxn24,UID:9c7ef249-4a61-11ea-a994-fa163e34d433,ResourceVersion:20964870,Generation:0,CreationTimestamp:2020-02-08 10:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9c574ef3-4a61-11ea-a994-fa163e34d433 0xc001bace77 0xc001bace78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8pqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8pqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w8pqx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bad020} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bad040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 10:56:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-08 10:56:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 10:56:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e93ec30c544bcda3330425df39ca26e649434b4857f5e49440317ec0d1e62134}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 10:56:56.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-wft57" for this suite. 
Feb 8 10:57:39.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:57:39.406: INFO: namespace: e2e-tests-deployment-wft57, resource: bindings, ignored listing per whitelist
Feb 8 10:57:39.454: INFO: namespace e2e-tests-deployment-wft57 deletion completed in 41.766136205s
• [SLOW TEST:91.422 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:57:39.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0208 10:57:52.301236 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 8 10:57:52.301: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:57:52.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bcn2w" for this suite.
Feb 8 10:58:04.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:58:04.547: INFO: namespace: e2e-tests-gc-bcn2w, resource: bindings, ignored listing per whitelist
Feb 8 10:58:04.706: INFO: namespace e2e-tests-gc-bcn2w deletion completed in 12.399194015s
• [SLOW TEST:25.252 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:58:04.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-f7msg/configmap-test-e1f514a8-4a61-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 8 10:58:05.198: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-f7msg" to be "success or failure"
Feb 8 10:58:05.210: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.726116ms
Feb 8 10:58:07.227: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029005959s
Feb 8 10:58:09.237: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039417584s
Feb 8 10:58:11.249: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050811777s
Feb 8 10:58:13.471: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273577451s
Feb 8 10:58:15.949: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.751278675s
Feb 8 10:58:17.958: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.760324474s
Feb 8 10:58:19.981: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.783167917s
Feb 8 10:58:21.997: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.799260679s
STEP: Saw pod success
Feb 8 10:58:21.997: INFO: Pod "pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:58:22.003: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005 container env-test:
STEP: delete the pod
Feb 8 10:58:22.640: INFO: Waiting for pod pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005 to disappear
Feb 8 10:58:22.661: INFO: Pod pod-configmaps-e1fabb7e-4a61-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:58:22.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f7msg" for this suite.
Feb 8 10:58:28.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:58:28.869: INFO: namespace: e2e-tests-configmap-f7msg, resource: bindings, ignored listing per whitelist
Feb 8 10:58:28.938: INFO: namespace e2e-tests-configmap-f7msg deletion completed in 6.270972955s
• [SLOW TEST:24.231 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:58:28.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 8 10:58:29.121: INFO: Waiting up to 5m0s for pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-shvwk" to be "success or failure"
Feb 8 10:58:29.185: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.494416ms
Feb 8 10:58:31.198: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077465269s
Feb 8 10:58:33.214: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092816054s
Feb 8 10:58:35.311: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189684493s
Feb 8 10:58:37.331: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.210246447s
Feb 8 10:58:39.340: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.219085152s
STEP: Saw pod success
Feb 8 10:58:39.340: INFO: Pod "pod-f03e280d-4a61-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 10:58:39.343: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f03e280d-4a61-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 10:58:39.457: INFO: Waiting for pod pod-f03e280d-4a61-11ea-95d6-0242ac110005 to disappear
Feb 8 10:58:39.480: INFO: Pod pod-f03e280d-4a61-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:58:39.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-shvwk" for this suite.
Feb 8 10:58:45.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:58:45.573: INFO: namespace: e2e-tests-emptydir-shvwk, resource: bindings, ignored listing per whitelist
Feb 8 10:58:45.742: INFO: namespace e2e-tests-emptydir-shvwk deletion completed in 6.253643565s
• [SLOW TEST:16.804 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:58:45.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 8 10:58:45.983: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965397,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 8 10:58:45.983: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965397,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 8 10:58:56.017: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965411,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 8 10:58:56.017: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965411,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 8 10:59:06.123: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965424,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 8 10:59:06.124: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965424,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 8 10:59:16.151: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965436,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 8 10:59:16.151: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-a,UID:fa4be4aa-4a61-11ea-a994-fa163e34d433,ResourceVersion:20965436,Generation:0,CreationTimestamp:2020-02-08 10:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 8 10:59:26.289: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-b,UID:1241002a-4a62-11ea-a994-fa163e34d433,ResourceVersion:20965449,Generation:0,CreationTimestamp:2020-02-08 10:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 8 10:59:26.289: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-b,UID:1241002a-4a62-11ea-a994-fa163e34d433,ResourceVersion:20965449,Generation:0,CreationTimestamp:2020-02-08 10:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 8 10:59:36.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-b,UID:1241002a-4a62-11ea-a994-fa163e34d433,ResourceVersion:20965462,Generation:0,CreationTimestamp:2020-02-08 10:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 8 10:59:36.343: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-46bxx,SelfLink:/api/v1/namespaces/e2e-tests-watch-46bxx/configmaps/e2e-watch-test-configmap-b,UID:1241002a-4a62-11ea-a994-fa163e34d433,ResourceVersion:20965462,Generation:0,CreationTimestamp:2020-02-08 10:59:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 10:59:46.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-46bxx" for this suite.
Feb 8 10:59:52.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 10:59:52.695: INFO: namespace: e2e-tests-watch-46bxx, resource: bindings, ignored listing per whitelist
Feb 8 10:59:52.736: INFO: namespace e2e-tests-watch-46bxx deletion completed in 6.376246259s
• [SLOW TEST:66.994 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 10:59:52.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 8 10:59:52.996: INFO: Waiting up to 5m0s for pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-vplz7" to be "success or failure"
Feb 8 10:59:53.026: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.098199ms
Feb 8 10:59:55.312: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315775213s
Feb 8 10:59:57.328: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332225219s
Feb 8 10:59:59.346: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350251512s
Feb 8 11:00:01.365: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369053473s
Feb 8 11:00:03.397: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.401208843s
STEP: Saw pod success
Feb 8 11:00:03.397: INFO: Pod "pod-22398e6e-4a62-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:00:03.421: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-22398e6e-4a62-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 11:00:03.668: INFO: Waiting for pod pod-22398e6e-4a62-11ea-95d6-0242ac110005 to disappear
Feb 8 11:00:03.676: INFO: Pod pod-22398e6e-4a62-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:00:03.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vplz7" for this suite.
Feb 8 11:00:09.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:00:09.984: INFO: namespace: e2e-tests-emptydir-vplz7, resource: bindings, ignored listing per whitelist
Feb 8 11:00:10.267: INFO: namespace e2e-tests-emptydir-vplz7 deletion completed in 6.579108009s
• [SLOW TEST:17.531 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:00:10.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 8 11:00:10.735: INFO: Creating ReplicaSet my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005
Feb 8 11:00:10.764: INFO: Pod name my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005: Found 0 pods out of 1
Feb 8 11:00:15.870: INFO: Pod name my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005: Found 1 pods out of 1
Feb 8 11:00:15.870: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005" is running
Feb 8 11:00:19.916: INFO: Pod "my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005-cvznk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:00:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:00:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:00:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:00:10 +0000 UTC Reason: Message:}])
Feb 8 11:00:19.916: INFO: Trying to dial the pod
Feb 8 11:00:25.018: INFO: Controller my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005: Got expected result from replica 1 [my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005-cvznk]: "my-hostname-basic-2cd0c4fc-4a62-11ea-95d6-0242ac110005-cvznk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:00:25.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-fmsgd" for this suite.
Feb 8 11:00:31.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:00:31.405: INFO: namespace: e2e-tests-replicaset-fmsgd, resource: bindings, ignored listing per whitelist
Feb 8 11:00:31.464: INFO: namespace e2e-tests-replicaset-fmsgd deletion completed in 6.429050766s
• [SLOW TEST:21.197 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:00:31.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qqcwf
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 8 11:00:31.619: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 8 11:01:09.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qqcwf
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 8 11:01:09.936: INFO: >>> kubeConfig: /root/.kube/config I0208 11:01:10.033660 8 log.go:172] (0xc00085c4d0) (0xc0023388c0) Create stream I0208 11:01:10.033737 8 log.go:172] (0xc00085c4d0) (0xc0023388c0) Stream added, broadcasting: 1 I0208 11:01:10.043782 8 log.go:172] (0xc00085c4d0) Reply frame received for 1 I0208 11:01:10.043834 8 log.go:172] (0xc00085c4d0) (0xc001919d60) Create stream I0208 11:01:10.043843 8 log.go:172] (0xc00085c4d0) (0xc001919d60) Stream added, broadcasting: 3 I0208 11:01:10.045711 8 log.go:172] (0xc00085c4d0) Reply frame received for 3 I0208 11:01:10.045758 8 log.go:172] (0xc00085c4d0) (0xc002338960) Create stream I0208 11:01:10.045783 8 log.go:172] (0xc00085c4d0) (0xc002338960) Stream added, broadcasting: 5 I0208 11:01:10.047601 8 log.go:172] (0xc00085c4d0) Reply frame received for 5 I0208 11:01:10.243172 8 log.go:172] (0xc00085c4d0) Data frame received for 3 I0208 11:01:10.243243 8 log.go:172] (0xc001919d60) (3) Data frame handling I0208 11:01:10.243266 8 log.go:172] (0xc001919d60) (3) Data frame sent I0208 11:01:10.369732 8 log.go:172] (0xc00085c4d0) Data frame received for 1 I0208 11:01:10.369883 8 log.go:172] (0xc0023388c0) (1) Data frame handling I0208 11:01:10.369905 8 log.go:172] (0xc0023388c0) (1) Data frame sent I0208 11:01:10.369940 8 log.go:172] (0xc00085c4d0) (0xc0023388c0) Stream removed, broadcasting: 1 I0208 11:01:10.370151 8 log.go:172] (0xc00085c4d0) (0xc001919d60) Stream removed, broadcasting: 3 I0208 11:01:10.371003 8 log.go:172] (0xc00085c4d0) (0xc002338960) Stream removed, broadcasting: 5 I0208 11:01:10.371047 8 log.go:172] (0xc00085c4d0) (0xc0023388c0) Stream removed, broadcasting: 1 I0208 11:01:10.371057 8 log.go:172] (0xc00085c4d0) (0xc001919d60) Stream removed, broadcasting: 3 I0208 11:01:10.371065 8 log.go:172] (0xc00085c4d0) (0xc002338960) Stream removed, broadcasting: 5 I0208 
11:01:10.371259 8 log.go:172] (0xc00085c4d0) Go away received Feb 8 11:01:10.371: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:01:10.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-qqcwf" for this suite. Feb 8 11:01:34.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:01:34.665: INFO: namespace: e2e-tests-pod-network-test-qqcwf, resource: bindings, ignored listing per whitelist Feb 8 11:01:34.701: INFO: namespace e2e-tests-pod-network-test-qqcwf deletion completed in 24.309636677s • [SLOW TEST:63.236 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:01:34.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 8 11:01:34.801: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 8 11:01:34.877: INFO: Waiting for terminating namespaces to be deleted... Feb 8 11:01:34.882: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 8 11:01:34.899: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 8 11:01:34.899: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 8 11:01:34.900: INFO: Container coredns ready: true, restart count 0 Feb 8 11:01:34.900: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 8 11:01:34.900: INFO: Container kube-proxy ready: true, restart count 0 Feb 8 11:01:34.900: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 8 11:01:34.900: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 8 11:01:34.900: INFO: Container weave ready: true, restart count 0 Feb 8 11:01:34.900: INFO: Container weave-npc ready: true, restart count 0 Feb 8 11:01:34.900: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 8 11:01:34.900: INFO: Container coredns ready: true, restart count 0 Feb 8 11:01:34.900: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 8 11:01:34.900: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with 
nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f1685d7e866199], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:01:35.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-z5rrw" for this suite. Feb 8 11:01:41.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:01:42.146: INFO: namespace: e2e-tests-sched-pred-z5rrw, resource: bindings, ignored listing per whitelist Feb 8 11:01:42.159: INFO: namespace e2e-tests-sched-pred-z5rrw deletion completed in 6.199882145s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.458 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:01:42.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be 
provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Feb 8 11:01:52.616: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:02:17.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-689rd" for this suite. Feb 8 11:02:24.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:02:24.125: INFO: namespace: e2e-tests-namespaces-689rd, resource: bindings, ignored listing per whitelist Feb 8 11:02:24.176: INFO: namespace e2e-tests-namespaces-689rd deletion completed in 6.195163319s STEP: Destroying namespace "e2e-tests-nsdeletetest-vbw2p" for this suite. Feb 8 11:02:24.179: INFO: Namespace e2e-tests-nsdeletetest-vbw2p was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-88g64" for this suite. 
Feb 8 11:02:30.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:02:30.280: INFO: namespace: e2e-tests-nsdeletetest-88g64, resource: bindings, ignored listing per whitelist Feb 8 11:02:30.399: INFO: namespace e2e-tests-nsdeletetest-88g64 deletion completed in 6.219752229s • [SLOW TEST:48.239 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:02:30.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 8 11:02:30.784: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"804114e9-4a62-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00120a812), BlockOwnerDeletion:(*bool)(0xc00120a813)}} Feb 8 11:02:30.806: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8031fb49-4a62-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00080958a), BlockOwnerDeletion:(*bool)(0xc00080958b)}} Feb 
8 11:02:30.926: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8033f019-4a62-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00120aa3a), BlockOwnerDeletion:(*bool)(0xc00120aa3b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:02:35.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-fxh56" for this suite. Feb 8 11:02:42.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:02:42.103: INFO: namespace: e2e-tests-gc-fxh56, resource: bindings, ignored listing per whitelist Feb 8 11:02:42.213: INFO: namespace e2e-tests-gc-fxh56 deletion completed in 6.21975079s • [SLOW TEST:11.813 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:02:42.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 8 11:02:53.006: INFO: Successfully updated pod "pod-update-activedeadlineseconds-872924d4-4a62-11ea-95d6-0242ac110005" Feb 8 11:02:53.006: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-872924d4-4a62-11ea-95d6-0242ac110005" in namespace "e2e-tests-pods-zk44s" to be "terminated due to deadline exceeded" Feb 8 11:02:53.239: INFO: Pod "pod-update-activedeadlineseconds-872924d4-4a62-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 233.116608ms Feb 8 11:02:55.252: INFO: Pod "pod-update-activedeadlineseconds-872924d4-4a62-11ea-95d6-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.246046872s Feb 8 11:02:55.252: INFO: Pod "pod-update-activedeadlineseconds-872924d4-4a62-11ea-95d6-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:02:55.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zk44s" for this suite. 
Feb 8 11:03:01.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:03:01.978: INFO: namespace: e2e-tests-pods-zk44s, resource: bindings, ignored listing per whitelist Feb 8 11:03:02.121: INFO: namespace e2e-tests-pods-zk44s deletion completed in 6.860208743s • [SLOW TEST:19.908 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:03:02.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-sgd2f Feb 8 11:03:10.364: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-sgd2f STEP: checking the pod's current state and verifying that restartCount is present Feb 8 11:03:10.370: INFO: Initial restart count of pod liveness-exec is 0 Feb 8 11:04:05.034: INFO: 
Restart count of pod e2e-tests-container-probe-sgd2f/liveness-exec is now 1 (54.663602695s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:04:05.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-sgd2f" for this suite. Feb 8 11:04:13.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:04:13.435: INFO: namespace: e2e-tests-container-probe-sgd2f, resource: bindings, ignored listing per whitelist Feb 8 11:04:13.473: INFO: namespace e2e-tests-container-probe-sgd2f deletion completed in 8.25168269s • [SLOW TEST:71.352 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:04:13.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 8 11:04:24.788: INFO: Successfully updated pod "labelsupdatebdca1587-4a62-11ea-95d6-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:04:26.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cdssb" for this suite. Feb 8 11:04:47.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:04:47.154: INFO: namespace: e2e-tests-projected-cdssb, resource: bindings, ignored listing per whitelist Feb 8 11:04:47.183: INFO: namespace e2e-tests-projected-cdssb deletion completed in 20.228288682s • [SLOW TEST:33.708 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:04:47.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d1b67eaf-4a62-11ea-95d6-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 8 11:04:47.600: INFO: Waiting up to 5m0s for pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-qk7fl" to be "success or failure" Feb 8 11:04:47.639: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.118873ms Feb 8 11:04:49.651: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050769198s Feb 8 11:04:51.673: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072317648s Feb 8 11:04:53.692: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091759898s Feb 8 11:04:55.708: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107354718s Feb 8 11:04:57.719: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.118907491s STEP: Saw pod success Feb 8 11:04:57.720: INFO: Pod "pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005" satisfied condition "success or failure" Feb 8 11:04:57.724: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 8 11:04:58.083: INFO: Waiting for pod pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005 to disappear Feb 8 11:04:58.096: INFO: Pod pod-configmaps-d1b73786-4a62-11ea-95d6-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:04:58.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qk7fl" for this suite. Feb 8 11:05:06.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:05:06.212: INFO: namespace: e2e-tests-configmap-qk7fl, resource: bindings, ignored listing per whitelist Feb 8 11:05:06.318: INFO: namespace e2e-tests-configmap-qk7fl deletion completed in 8.206940954s • [SLOW TEST:19.135 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
Feb 8 11:05:06.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:05:16.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-k4mvx" for this suite. Feb 8 11:06:06.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:06:06.998: INFO: namespace: e2e-tests-kubelet-test-k4mvx, resource: bindings, ignored listing per whitelist Feb 8 11:06:07.087: INFO: namespace e2e-tests-kubelet-test-k4mvx deletion completed in 50.213822603s • [SLOW TEST:60.769 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:06:07.087: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-015f3af4-4a63-11ea-95d6-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-015f3bab-4a63-11ea-95d6-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-015f3af4-4a63-11ea-95d6-0242ac110005 STEP: Updating configmap cm-test-opt-upd-015f3bab-4a63-11ea-95d6-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-015f3bcb-4a63-11ea-95d6-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:07:52.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4f57p" for this suite. 
Feb 8 11:08:18.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:08:18.796: INFO: namespace: e2e-tests-configmap-4f57p, resource: bindings, ignored listing per whitelist Feb 8 11:08:18.890: INFO: namespace e2e-tests-configmap-4f57p deletion completed in 26.215948889s • [SLOW TEST:131.802 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:08:18.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 8 11:08:30.238: INFO: Waiting up to 5m0s for pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005" in namespace "e2e-tests-pods-fd6zq" to be "success or failure" Feb 8 11:08:30.282: INFO: Pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.507185ms
Feb 8 11:08:32.296: INFO: Pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056887535s
Feb 8 11:08:34.355: INFO: Pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116057801s
Feb 8 11:08:36.374: INFO: Pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134811427s
Feb 8 11:08:38.401: INFO: Pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162581168s
STEP: Saw pod success
Feb 8 11:08:38.402: INFO: Pod "client-envvars-56859398-4a63-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:08:38.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-56859398-4a63-11ea-95d6-0242ac110005 container env3cont:
STEP: delete the pod
Feb 8 11:08:38.567: INFO: Waiting for pod client-envvars-56859398-4a63-11ea-95d6-0242ac110005 to disappear
Feb 8 11:08:38.582: INFO: Pod client-envvars-56859398-4a63-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:08:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fd6zq" for this suite.
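The test above checks the Docker-link-style environment variables that the kubelet injects into pods for each active service: the service name is uppercased, dashes become underscores, and `_SERVICE_HOST` / `_SERVICE_PORT` suffixes are appended (plus per-port variables not shown here). A sketch of that naming rule, with illustrative values:

```python
def service_env_vars(service_name: str, host: str, port: int) -> dict:
    """Basic env vars Kubernetes injects for an active service.

    Only the two core variables are sketched; the kubelet also injects
    per-port variables such as NAME_PORT_80_TCP_ADDR.
    """
    prefix = service_name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": host,
        f"{prefix}_SERVICE_PORT": str(port),
    }
```

For example, a service named `fooservice-1` yields `FOOSERVICE_1_SERVICE_HOST` and `FOOSERVICE_1_SERVICE_PORT`; the `env3cont` container in the test dumps its environment so the framework can assert these names are present.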
Feb 8 11:09:24.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:09:24.740: INFO: namespace: e2e-tests-pods-fd6zq, resource: bindings, ignored listing per whitelist
Feb 8 11:09:24.871: INFO: namespace e2e-tests-pods-fd6zq deletion completed in 46.225048167s
• [SLOW TEST:65.982 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:09:24.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tgk7b
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-tgk7b
STEP: Waiting until all stateful set ss
replicas will be running in namespace e2e-tests-statefulset-tgk7b Feb 8 11:09:25.152: INFO: Found 0 stateful pods, waiting for 1 Feb 8 11:09:35.184: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Feb 8 11:09:45.183: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 8 11:09:45.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 8 11:09:45.984: INFO: stderr: "I0208 11:09:45.425151 66 log.go:172] (0xc00073a370) (0xc000758640) Create stream\nI0208 11:09:45.425390 66 log.go:172] (0xc00073a370) (0xc000758640) Stream added, broadcasting: 1\nI0208 11:09:45.431923 66 log.go:172] (0xc00073a370) Reply frame received for 1\nI0208 11:09:45.431984 66 log.go:172] (0xc00073a370) (0xc0005bebe0) Create stream\nI0208 11:09:45.431998 66 log.go:172] (0xc00073a370) (0xc0005bebe0) Stream added, broadcasting: 3\nI0208 11:09:45.434015 66 log.go:172] (0xc00073a370) Reply frame received for 3\nI0208 11:09:45.434062 66 log.go:172] (0xc00073a370) (0xc000340000) Create stream\nI0208 11:09:45.434087 66 log.go:172] (0xc00073a370) (0xc000340000) Stream added, broadcasting: 5\nI0208 11:09:45.435409 66 log.go:172] (0xc00073a370) Reply frame received for 5\nI0208 11:09:45.739537 66 log.go:172] (0xc00073a370) Data frame received for 3\nI0208 11:09:45.739613 66 log.go:172] (0xc0005bebe0) (3) Data frame handling\nI0208 11:09:45.739638 66 log.go:172] (0xc0005bebe0) (3) Data frame sent\nI0208 11:09:45.967471 66 log.go:172] (0xc00073a370) Data frame received for 1\nI0208 11:09:45.967739 66 log.go:172] (0xc000758640) (1) Data frame handling\nI0208 11:09:45.967800 66 log.go:172] (0xc000758640) (1) Data frame sent\nI0208 11:09:45.967835 66 log.go:172] (0xc00073a370) (0xc000758640) Stream removed, 
broadcasting: 1\nI0208 11:09:45.968285 66 log.go:172] (0xc00073a370) (0xc0005bebe0) Stream removed, broadcasting: 3\nI0208 11:09:45.968621 66 log.go:172] (0xc00073a370) (0xc000340000) Stream removed, broadcasting: 5\nI0208 11:09:45.968934 66 log.go:172] (0xc00073a370) (0xc000758640) Stream removed, broadcasting: 1\nI0208 11:09:45.969045 66 log.go:172] (0xc00073a370) (0xc0005bebe0) Stream removed, broadcasting: 3\nI0208 11:09:45.969108 66 log.go:172] (0xc00073a370) (0xc000340000) Stream removed, broadcasting: 5\n" Feb 8 11:09:45.985: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 8 11:09:45.985: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 8 11:09:46.008: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 8 11:09:56.024: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 8 11:09:56.024: INFO: Waiting for statefulset status.replicas updated to 0 Feb 8 11:09:56.063: INFO: POD NODE PHASE GRACE CONDITIONS Feb 8 11:09:56.063: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC }] Feb 8 11:09:56.063: INFO: Feb 8 11:09:56.063: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 8 11:09:57.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986043665s Feb 8 11:09:58.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.714460236s Feb 8 11:09:59.467: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 6.621269858s Feb 8 11:10:00.481: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.582259805s Feb 8 11:10:01.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.56819424s Feb 8 11:10:03.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.525227632s Feb 8 11:10:04.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.01857758s Feb 8 11:10:05.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.001391396s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-tgk7b Feb 8 11:10:06.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:10:06.762: INFO: stderr: "I0208 11:10:06.371464 88 log.go:172] (0xc0006f8370) (0xc00059f400) Create stream\nI0208 11:10:06.371703 88 log.go:172] (0xc0006f8370) (0xc00059f400) Stream added, broadcasting: 1\nI0208 11:10:06.380886 88 log.go:172] (0xc0006f8370) Reply frame received for 1\nI0208 11:10:06.380924 88 log.go:172] (0xc0006f8370) (0xc00059f4a0) Create stream\nI0208 11:10:06.380942 88 log.go:172] (0xc0006f8370) (0xc00059f4a0) Stream added, broadcasting: 3\nI0208 11:10:06.382307 88 log.go:172] (0xc0006f8370) Reply frame received for 3\nI0208 11:10:06.382334 88 log.go:172] (0xc0006f8370) (0xc0005c6000) Create stream\nI0208 11:10:06.382343 88 log.go:172] (0xc0006f8370) (0xc0005c6000) Stream added, broadcasting: 5\nI0208 11:10:06.383939 88 log.go:172] (0xc0006f8370) Reply frame received for 5\nI0208 11:10:06.591885 88 log.go:172] (0xc0006f8370) Data frame received for 3\nI0208 11:10:06.592049 88 log.go:172] (0xc00059f4a0) (3) Data frame handling\nI0208 11:10:06.592093 88 log.go:172] (0xc00059f4a0) (3) Data frame sent\nI0208 11:10:06.750852 88 log.go:172] (0xc0006f8370) Data frame received for 1\nI0208 11:10:06.751011 88 
log.go:172] (0xc0006f8370) (0xc0005c6000) Stream removed, broadcasting: 5\nI0208 11:10:06.751067 88 log.go:172] (0xc00059f400) (1) Data frame handling\nI0208 11:10:06.751108 88 log.go:172] (0xc00059f400) (1) Data frame sent\nI0208 11:10:06.751308 88 log.go:172] (0xc0006f8370) (0xc00059f4a0) Stream removed, broadcasting: 3\nI0208 11:10:06.751360 88 log.go:172] (0xc0006f8370) (0xc00059f400) Stream removed, broadcasting: 1\nI0208 11:10:06.751372 88 log.go:172] (0xc0006f8370) Go away received\nI0208 11:10:06.752062 88 log.go:172] (0xc0006f8370) (0xc00059f400) Stream removed, broadcasting: 1\nI0208 11:10:06.752073 88 log.go:172] (0xc0006f8370) (0xc00059f4a0) Stream removed, broadcasting: 3\nI0208 11:10:06.752077 88 log.go:172] (0xc0006f8370) (0xc0005c6000) Stream removed, broadcasting: 5\n" Feb 8 11:10:06.762: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 8 11:10:06.762: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 8 11:10:06.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:10:07.440: INFO: stderr: "I0208 11:10:06.963542 111 log.go:172] (0xc0001f64d0) (0xc000677220) Create stream\nI0208 11:10:06.963752 111 log.go:172] (0xc0001f64d0) (0xc000677220) Stream added, broadcasting: 1\nI0208 11:10:06.968938 111 log.go:172] (0xc0001f64d0) Reply frame received for 1\nI0208 11:10:06.968978 111 log.go:172] (0xc0001f64d0) (0xc000754000) Create stream\nI0208 11:10:06.968987 111 log.go:172] (0xc0001f64d0) (0xc000754000) Stream added, broadcasting: 3\nI0208 11:10:06.969946 111 log.go:172] (0xc0001f64d0) Reply frame received for 3\nI0208 11:10:06.969990 111 log.go:172] (0xc0001f64d0) (0xc0002a2000) Create stream\nI0208 11:10:06.970006 111 log.go:172] (0xc0001f64d0) (0xc0002a2000) Stream added, broadcasting: 
5\nI0208 11:10:06.971022 111 log.go:172] (0xc0001f64d0) Reply frame received for 5\nI0208 11:10:07.130187 111 log.go:172] (0xc0001f64d0) Data frame received for 5\nI0208 11:10:07.130352 111 log.go:172] (0xc0002a2000) (5) Data frame handling\nI0208 11:10:07.130429 111 log.go:172] (0xc0002a2000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0208 11:10:07.130642 111 log.go:172] (0xc0001f64d0) Data frame received for 3\nI0208 11:10:07.130738 111 log.go:172] (0xc000754000) (3) Data frame handling\nI0208 11:10:07.130807 111 log.go:172] (0xc000754000) (3) Data frame sent\nI0208 11:10:07.421819 111 log.go:172] (0xc0001f64d0) (0xc000754000) Stream removed, broadcasting: 3\nI0208 11:10:07.422126 111 log.go:172] (0xc0001f64d0) (0xc0002a2000) Stream removed, broadcasting: 5\nI0208 11:10:07.422255 111 log.go:172] (0xc0001f64d0) Data frame received for 1\nI0208 11:10:07.422279 111 log.go:172] (0xc000677220) (1) Data frame handling\nI0208 11:10:07.422363 111 log.go:172] (0xc000677220) (1) Data frame sent\nI0208 11:10:07.422376 111 log.go:172] (0xc0001f64d0) (0xc000677220) Stream removed, broadcasting: 1\nI0208 11:10:07.422396 111 log.go:172] (0xc0001f64d0) Go away received\nI0208 11:10:07.423480 111 log.go:172] (0xc0001f64d0) (0xc000677220) Stream removed, broadcasting: 1\nI0208 11:10:07.423535 111 log.go:172] (0xc0001f64d0) (0xc000754000) Stream removed, broadcasting: 3\nI0208 11:10:07.423551 111 log.go:172] (0xc0001f64d0) (0xc0002a2000) Stream removed, broadcasting: 5\n" Feb 8 11:10:07.440: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 8 11:10:07.440: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 8 11:10:07.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:10:08.344: 
INFO: stderr: "I0208 11:10:07.871203 133 log.go:172] (0xc000704370) (0xc0007b4640) Create stream\nI0208 11:10:07.871549 133 log.go:172] (0xc000704370) (0xc0007b4640) Stream added, broadcasting: 1\nI0208 11:10:07.884553 133 log.go:172] (0xc000704370) Reply frame received for 1\nI0208 11:10:07.884599 133 log.go:172] (0xc000704370) (0xc0005f8d20) Create stream\nI0208 11:10:07.884610 133 log.go:172] (0xc000704370) (0xc0005f8d20) Stream added, broadcasting: 3\nI0208 11:10:07.885637 133 log.go:172] (0xc000704370) Reply frame received for 3\nI0208 11:10:07.885667 133 log.go:172] (0xc000704370) (0xc0007b46e0) Create stream\nI0208 11:10:07.885677 133 log.go:172] (0xc000704370) (0xc0007b46e0) Stream added, broadcasting: 5\nI0208 11:10:07.886847 133 log.go:172] (0xc000704370) Reply frame received for 5\nI0208 11:10:08.096114 133 log.go:172] (0xc000704370) Data frame received for 5\nI0208 11:10:08.096251 133 log.go:172] (0xc0007b46e0) (5) Data frame handling\nI0208 11:10:08.096276 133 log.go:172] (0xc0007b46e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0208 11:10:08.096301 133 log.go:172] (0xc000704370) Data frame received for 3\nI0208 11:10:08.096311 133 log.go:172] (0xc0005f8d20) (3) Data frame handling\nI0208 11:10:08.096334 133 log.go:172] (0xc0005f8d20) (3) Data frame sent\nI0208 11:10:08.327535 133 log.go:172] (0xc000704370) Data frame received for 1\nI0208 11:10:08.327659 133 log.go:172] (0xc000704370) (0xc0007b46e0) Stream removed, broadcasting: 5\nI0208 11:10:08.327706 133 log.go:172] (0xc0007b4640) (1) Data frame handling\nI0208 11:10:08.327742 133 log.go:172] (0xc0007b4640) (1) Data frame sent\nI0208 11:10:08.327850 133 log.go:172] (0xc000704370) (0xc0007b4640) Stream removed, broadcasting: 1\nI0208 11:10:08.328146 133 log.go:172] (0xc000704370) (0xc0005f8d20) Stream removed, broadcasting: 3\nI0208 11:10:08.328334 133 log.go:172] (0xc000704370) Go away received\nI0208 11:10:08.328712 133 log.go:172] (0xc000704370) 
(0xc0007b4640) Stream removed, broadcasting: 1\nI0208 11:10:08.328740 133 log.go:172] (0xc000704370) (0xc0005f8d20) Stream removed, broadcasting: 3\nI0208 11:10:08.328751 133 log.go:172] (0xc000704370) (0xc0007b46e0) Stream removed, broadcasting: 5\n" Feb 8 11:10:08.344: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 8 11:10:08.344: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 8 11:10:08.377: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 8 11:10:08.377: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 8 11:10:08.377: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 8 11:10:08.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 8 11:10:09.137: INFO: stderr: "I0208 11:10:08.720240 155 log.go:172] (0xc00070e370) (0xc000730640) Create stream\nI0208 11:10:08.720687 155 log.go:172] (0xc00070e370) (0xc000730640) Stream added, broadcasting: 1\nI0208 11:10:08.771155 155 log.go:172] (0xc00070e370) Reply frame received for 1\nI0208 11:10:08.771306 155 log.go:172] (0xc00070e370) (0xc0005c8b40) Create stream\nI0208 11:10:08.771354 155 log.go:172] (0xc00070e370) (0xc0005c8b40) Stream added, broadcasting: 3\nI0208 11:10:08.781122 155 log.go:172] (0xc00070e370) Reply frame received for 3\nI0208 11:10:08.781146 155 log.go:172] (0xc00070e370) (0xc0007306e0) Create stream\nI0208 11:10:08.781171 155 log.go:172] (0xc00070e370) (0xc0007306e0) Stream added, broadcasting: 5\nI0208 11:10:08.782651 155 log.go:172] (0xc00070e370) Reply frame received for 5\nI0208 11:10:09.000482 155 log.go:172] (0xc00070e370) Data frame received for 
3\nI0208 11:10:09.000580 155 log.go:172] (0xc0005c8b40) (3) Data frame handling\nI0208 11:10:09.000597 155 log.go:172] (0xc0005c8b40) (3) Data frame sent\nI0208 11:10:09.121089 155 log.go:172] (0xc00070e370) Data frame received for 1\nI0208 11:10:09.121441 155 log.go:172] (0xc000730640) (1) Data frame handling\nI0208 11:10:09.121545 155 log.go:172] (0xc000730640) (1) Data frame sent\nI0208 11:10:09.122724 155 log.go:172] (0xc00070e370) (0xc000730640) Stream removed, broadcasting: 1\nI0208 11:10:09.124061 155 log.go:172] (0xc00070e370) (0xc0005c8b40) Stream removed, broadcasting: 3\nI0208 11:10:09.124264 155 log.go:172] (0xc00070e370) (0xc0007306e0) Stream removed, broadcasting: 5\nI0208 11:10:09.124368 155 log.go:172] (0xc00070e370) (0xc000730640) Stream removed, broadcasting: 1\nI0208 11:10:09.124415 155 log.go:172] (0xc00070e370) (0xc0005c8b40) Stream removed, broadcasting: 3\nI0208 11:10:09.124437 155 log.go:172] (0xc00070e370) (0xc0007306e0) Stream removed, broadcasting: 5\n" Feb 8 11:10:09.137: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 8 11:10:09.137: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 8 11:10:09.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 8 11:10:09.704: INFO: stderr: "I0208 11:10:09.327754 177 log.go:172] (0xc0006ee2c0) (0xc000710640) Create stream\nI0208 11:10:09.327942 177 log.go:172] (0xc0006ee2c0) (0xc000710640) Stream added, broadcasting: 1\nI0208 11:10:09.332691 177 log.go:172] (0xc0006ee2c0) Reply frame received for 1\nI0208 11:10:09.332719 177 log.go:172] (0xc0006ee2c0) (0xc00065cc80) Create stream\nI0208 11:10:09.332724 177 log.go:172] (0xc0006ee2c0) (0xc00065cc80) Stream added, broadcasting: 3\nI0208 11:10:09.333761 177 log.go:172] (0xc0006ee2c0) Reply frame 
received for 3\nI0208 11:10:09.333792 177 log.go:172] (0xc0006ee2c0) (0xc0007106e0) Create stream\nI0208 11:10:09.333799 177 log.go:172] (0xc0006ee2c0) (0xc0007106e0) Stream added, broadcasting: 5\nI0208 11:10:09.334704 177 log.go:172] (0xc0006ee2c0) Reply frame received for 5\nI0208 11:10:09.555291 177 log.go:172] (0xc0006ee2c0) Data frame received for 3\nI0208 11:10:09.555379 177 log.go:172] (0xc00065cc80) (3) Data frame handling\nI0208 11:10:09.555406 177 log.go:172] (0xc00065cc80) (3) Data frame sent\nI0208 11:10:09.692924 177 log.go:172] (0xc0006ee2c0) (0xc0007106e0) Stream removed, broadcasting: 5\nI0208 11:10:09.693168 177 log.go:172] (0xc0006ee2c0) Data frame received for 1\nI0208 11:10:09.693218 177 log.go:172] (0xc0006ee2c0) (0xc00065cc80) Stream removed, broadcasting: 3\nI0208 11:10:09.693268 177 log.go:172] (0xc000710640) (1) Data frame handling\nI0208 11:10:09.693280 177 log.go:172] (0xc000710640) (1) Data frame sent\nI0208 11:10:09.693292 177 log.go:172] (0xc0006ee2c0) (0xc000710640) Stream removed, broadcasting: 1\nI0208 11:10:09.693308 177 log.go:172] (0xc0006ee2c0) Go away received\nI0208 11:10:09.694034 177 log.go:172] (0xc0006ee2c0) (0xc000710640) Stream removed, broadcasting: 1\nI0208 11:10:09.694082 177 log.go:172] (0xc0006ee2c0) (0xc00065cc80) Stream removed, broadcasting: 3\nI0208 11:10:09.694103 177 log.go:172] (0xc0006ee2c0) (0xc0007106e0) Stream removed, broadcasting: 5\n" Feb 8 11:10:09.704: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 8 11:10:09.705: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 8 11:10:09.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 8 11:10:10.336: INFO: stderr: "I0208 11:10:09.904901 199 log.go:172] (0xc0006dc370) (0xc000700640) Create 
stream\nI0208 11:10:09.905005 199 log.go:172] (0xc0006dc370) (0xc000700640) Stream added, broadcasting: 1\nI0208 11:10:09.910824 199 log.go:172] (0xc0006dc370) Reply frame received for 1\nI0208 11:10:09.910902 199 log.go:172] (0xc0006dc370) (0xc0007006e0) Create stream\nI0208 11:10:09.910911 199 log.go:172] (0xc0006dc370) (0xc0007006e0) Stream added, broadcasting: 3\nI0208 11:10:09.912068 199 log.go:172] (0xc0006dc370) Reply frame received for 3\nI0208 11:10:09.912092 199 log.go:172] (0xc0006dc370) (0xc000774dc0) Create stream\nI0208 11:10:09.912098 199 log.go:172] (0xc0006dc370) (0xc000774dc0) Stream added, broadcasting: 5\nI0208 11:10:09.913230 199 log.go:172] (0xc0006dc370) Reply frame received for 5\nI0208 11:10:10.119568 199 log.go:172] (0xc0006dc370) Data frame received for 3\nI0208 11:10:10.119827 199 log.go:172] (0xc0007006e0) (3) Data frame handling\nI0208 11:10:10.119893 199 log.go:172] (0xc0007006e0) (3) Data frame sent\nI0208 11:10:10.327015 199 log.go:172] (0xc0006dc370) Data frame received for 1\nI0208 11:10:10.327166 199 log.go:172] (0xc0006dc370) (0xc0007006e0) Stream removed, broadcasting: 3\nI0208 11:10:10.327200 199 log.go:172] (0xc000700640) (1) Data frame handling\nI0208 11:10:10.327212 199 log.go:172] (0xc000700640) (1) Data frame sent\nI0208 11:10:10.327287 199 log.go:172] (0xc0006dc370) (0xc000774dc0) Stream removed, broadcasting: 5\nI0208 11:10:10.327368 199 log.go:172] (0xc0006dc370) (0xc000700640) Stream removed, broadcasting: 1\nI0208 11:10:10.327405 199 log.go:172] (0xc0006dc370) Go away received\nI0208 11:10:10.328197 199 log.go:172] (0xc0006dc370) (0xc000700640) Stream removed, broadcasting: 1\nI0208 11:10:10.328388 199 log.go:172] (0xc0006dc370) (0xc0007006e0) Stream removed, broadcasting: 3\nI0208 11:10:10.328408 199 log.go:172] (0xc0006dc370) (0xc000774dc0) Stream removed, broadcasting: 5\n" Feb 8 11:10:10.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 8 11:10:10.336: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 8 11:10:10.336: INFO: Waiting for statefulset status.replicas updated to 0 Feb 8 11:10:10.357: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 8 11:10:10.357: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 8 11:10:10.357: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 8 11:10:10.383: INFO: POD NODE PHASE GRACE CONDITIONS Feb 8 11:10:10.383: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC }] Feb 8 11:10:10.383: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:10.383: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:10.383: INFO: Feb 8 11:10:10.383: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 8 11:10:12.255: INFO: POD NODE PHASE GRACE CONDITIONS Feb 8 11:10:12.255: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC }] Feb 8 11:10:12.256: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:12.256: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:12.256: INFO: Feb 8 11:10:12.256: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 8 11:10:13.280: INFO: POD NODE PHASE GRACE CONDITIONS Feb 8 11:10:13.280: INFO: ss-0 hunter-server-hu5at5svl7ps 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC }] Feb 8 11:10:13.280: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:13.280: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:13.281: INFO: Feb 8 11:10:13.281: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 8 11:10:14.308: INFO: POD NODE PHASE GRACE CONDITIONS Feb 8 11:10:14.308: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:25 +0000 UTC }] Feb 8 11:10:14.309: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:14.309: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:09:56 +0000 UTC }] Feb 8 11:10:14.309: INFO: Feb 8 11:10:14.309: INFO: StatefulSet ss has not reached scale 0, at 3 [... four further status polls at 11:10:15.826, 11:10:17.091, 11:10:18.549 and 11:10:19.566 omitted; each reported the same three pods (ss-0, ss-1, ss-2) still unready and "StatefulSet ss has not reached scale 0, at 3" ...] STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-tgk7b Feb 8 11:10:20.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:10:20.971: INFO: rc: 1 Feb 8 11:10:20.971: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001676660 exit status 1 true [0xc001418158 0xc001418170 0xc001418188] [0xc001418158 0xc001418170 0xc001418188] [0xc001418168 0xc001418180] [0x935700 0x935700] 0xc001b85020 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 8 11:10:30.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:10:31.229: INFO: rc: 1 Feb 8 11:10:31.229: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ed2330 exit status 1 true [0xc0012060a8 0xc0012060c0 0xc0012060d8] [0xc0012060a8 0xc0012060c0 0xc0012060d8] [0xc0012060b8 0xc0012060d0] [0x935700 0x935700] 0xc0021ca9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 8 11:10:41.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v
/tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:10:41.429: INFO: rc: 1 Feb 8 11:10:41.429: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00157b4a0 exit status 1 true [0xc000bce118 0xc000bce130 0xc000bce148] [0xc000bce118 0xc000bce130 0xc000bce148] [0xc000bce128 0xc000bce140] [0x935700 0x935700] 0xc001855260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [... the same RunHostCmd attempt was retried every 10s from 11:10:51 through 11:15:15, each failing with 'Error from server (NotFound): pods "ss-0" not found'; identical retries omitted ...] Feb 8 11:15:25.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tgk7b ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 8 11:15:25.929: INFO: rc: 1 Feb 8 11:15:25.929: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 8 11:15:25.929: INFO: Scaling statefulset ss to 0 Feb 8 11:15:25.953: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 8 11:15:25.956: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tgk7b Feb 8 11:15:25.959: INFO: Scaling statefulset ss to 0 Feb 8 11:15:25.969: INFO: Waiting for statefulset status.replicas updated to 0 Feb 8 11:15:25.971: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:15:26.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-tgk7b" for this suite. Feb 8 11:15:32.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:15:32.302: INFO: namespace: e2e-tests-statefulset-tgk7b, resource: bindings, ignored listing per whitelist Feb 8 11:15:32.318: INFO: namespace e2e-tests-statefulset-tgk7b deletion completed in 6.309525615s • [SLOW TEST:367.447 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:15:32.318: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 8 11:15:50.815: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 8 11:15:50.825: INFO: Pod pod-with-prestop-exec-hook still exists [... the same 2s poll repeated from 11:15:52 through 11:16:20, each reporting "Pod pod-with-prestop-exec-hook still exists"; identical polls omitted ...] Feb 8 11:16:22.825: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 8 11:16:22.842: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:16:22.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dngfv" for this suite.
Feb 8 11:16:47.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:16:47.354: INFO: namespace: e2e-tests-container-lifecycle-hook-dngfv, resource: bindings, ignored listing per whitelist
Feb 8 11:16:47.390: INFO: namespace e2e-tests-container-lifecycle-hook-dngfv deletion completed in 24.496054482s

• [SLOW TEST:75.071 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:16:47.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-7f029827-4a64-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 8 11:16:47.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-fzfk7" to be "success or failure"
Feb 8 11:16:47.665: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.330799ms
Feb 8 11:16:49.678: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028646993s
Feb 8 11:16:51.692: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042688272s
Feb 8 11:16:53.716: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066552784s
Feb 8 11:16:56.485: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.835786257s
Feb 8 11:16:58.541: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.891081903s
STEP: Saw pod success
Feb 8 11:16:58.541: INFO: Pod "pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:16:58.568: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Feb 8 11:16:58.917: INFO: Waiting for pod pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:16:58.937: INFO: Pod pod-configmaps-7f040bec-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:16:58.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fzfk7" for this suite.
Feb 8 11:17:07.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:17:07.040: INFO: namespace: e2e-tests-configmap-fzfk7, resource: bindings, ignored listing per whitelist
Feb 8 11:17:07.178: INFO: namespace e2e-tests-configmap-fzfk7 deletion completed in 8.226226231s

• [SLOW TEST:19.788 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:17:07.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-8ad60b7c-4a64-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 8 11:17:07.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-nfdgk" to be "success or failure"
Feb 8 11:17:07.577: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.161231ms
Feb 8 11:17:09.598: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039249132s
Feb 8 11:17:11.622: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062736654s
Feb 8 11:17:13.799: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239906793s
Feb 8 11:17:15.863: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.303843689s
Feb 8 11:17:17.883: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324298848s
STEP: Saw pod success
Feb 8 11:17:17.883: INFO: Pod "pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:17:17.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005 container secret-volume-test:
STEP: delete the pod
Feb 8 11:17:18.043: INFO: Waiting for pod pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:17:18.109: INFO: Pod pod-projected-secrets-8ad7aace-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:17:18.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nfdgk" for this suite.
Feb 8 11:17:24.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:17:24.423: INFO: namespace: e2e-tests-projected-nfdgk, resource: bindings, ignored listing per whitelist
Feb 8 11:17:24.456: INFO: namespace e2e-tests-projected-nfdgk deletion completed in 6.33803439s

• [SLOW TEST:17.278 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:17:24.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-950c04e1-4a64-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 8 11:17:24.669: INFO: Waiting up to 5m0s for pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-z77zd" to be "success or failure"
Feb 8 11:17:24.688: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.122115ms
Feb 8 11:17:26.703: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034246716s
Feb 8 11:17:28.729: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0603243s
Feb 8 11:17:30.761: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091960873s
Feb 8 11:17:33.277: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608567701s
Feb 8 11:17:35.379: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.710128457s
STEP: Saw pod success
Feb 8 11:17:35.379: INFO: Pod "pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:17:35.388: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005 container secret-volume-test:
STEP: delete the pod
Feb 8 11:17:35.919: INFO: Waiting for pod pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:17:35.970: INFO: Pod pod-secrets-950cc83f-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:17:35.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z77zd" for this suite.
Feb 8 11:17:42.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:17:42.741: INFO: namespace: e2e-tests-secrets-z77zd, resource: bindings, ignored listing per whitelist
Feb 8 11:17:42.752: INFO: namespace e2e-tests-secrets-z77zd deletion completed in 6.660472899s

• [SLOW TEST:18.296 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:17:42.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 8 11:17:43.014: INFO: Waiting up to 5m0s for pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-gfvdd" to be "success or failure"
Feb 8 11:17:43.032: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.092706ms
Feb 8 11:17:45.787: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.77271023s
Feb 8 11:17:47.810: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.796337171s
Feb 8 11:17:50.144: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.130186314s
Feb 8 11:17:52.168: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.154037231s
Feb 8 11:17:54.189: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.175494578s
STEP: Saw pod success
Feb 8 11:17:54.189: INFO: Pod "pod-9ff27203-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:17:54.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9ff27203-4a64-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 11:17:54.422: INFO: Waiting for pod pod-9ff27203-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:17:54.429: INFO: Pod pod-9ff27203-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:17:54.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gfvdd" for this suite.
Feb 8 11:18:00.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:18:00.665: INFO: namespace: e2e-tests-emptydir-gfvdd, resource: bindings, ignored listing per whitelist
Feb 8 11:18:00.668: INFO: namespace e2e-tests-emptydir-gfvdd deletion completed in 6.229270569s

• [SLOW TEST:17.916 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:18:00.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 8 11:18:00.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-z5fzn" to be "success or failure"
Feb 8 11:18:00.992: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.236383ms
Feb 8 11:18:03.005: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02460649s
Feb 8 11:18:05.015: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033968014s
Feb 8 11:18:07.033: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052231124s
Feb 8 11:18:09.047: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066186791s
Feb 8 11:18:11.060: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079702746s
STEP: Saw pod success
Feb 8 11:18:11.061: INFO: Pod "downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:18:11.065: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005 container client-container:
STEP: delete the pod
Feb 8 11:18:11.310: INFO: Waiting for pod downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:18:11.597: INFO: Pod downwardapi-volume-aab8eaa3-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:18:11.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z5fzn" for this suite.
Feb 8 11:18:17.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:18:17.879: INFO: namespace: e2e-tests-projected-z5fzn, resource: bindings, ignored listing per whitelist
Feb 8 11:18:17.898: INFO: namespace e2e-tests-projected-z5fzn deletion completed in 6.276582893s

• [SLOW TEST:17.228 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:18:17.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 8 11:18:18.134: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:18:18.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5pgmp" for this suite.
Feb 8 11:18:24.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:18:24.431: INFO: namespace: e2e-tests-kubectl-5pgmp, resource: bindings, ignored listing per whitelist
Feb 8 11:18:24.542: INFO: namespace e2e-tests-kubectl-5pgmp deletion completed in 6.256949562s

• [SLOW TEST:6.644 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:18:24.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 8 11:18:24.775: INFO: Waiting up to 5m0s for pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-var-expansion-444l4" to be "success or failure"
Feb 8 11:18:24.790: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.527265ms
Feb 8 11:18:26.825: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049954729s
Feb 8 11:18:28.844: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069155471s
Feb 8 11:18:31.129: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.35418861s
Feb 8 11:18:33.368: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593373751s
Feb 8 11:18:35.893: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.118497479s
STEP: Saw pod success
Feb 8 11:18:35.893: INFO: Pod "var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:18:35.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005 container dapi-container:
STEP: delete the pod
Feb 8 11:18:36.958: INFO: Waiting for pod var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:18:37.068: INFO: Pod var-expansion-b8e4a04d-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:18:37.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-444l4" for this suite.
Feb 8 11:18:43.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:18:43.179: INFO: namespace: e2e-tests-var-expansion-444l4, resource: bindings, ignored listing per whitelist
Feb 8 11:18:43.282: INFO: namespace e2e-tests-var-expansion-444l4 deletion completed in 6.202860842s

• [SLOW TEST:18.740 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:18:43.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 8 11:18:43.487: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:18:44.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-4xnkq" for this suite.
Feb 8 11:18:50.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:18:50.854: INFO: namespace: e2e-tests-custom-resource-definition-4xnkq, resource: bindings, ignored listing per whitelist
Feb 8 11:18:50.866: INFO: namespace e2e-tests-custom-resource-definition-4xnkq deletion completed in 6.267247378s

• [SLOW TEST:7.584 seconds]
[sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:18:50.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 8 11:18:51.226: INFO: Waiting up to 5m0s for pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-kblxt" to be "success or failure"
Feb 8 11:18:51.243: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.20945ms
Feb 8 11:18:53.735: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.509146143s
Feb 8 11:18:55.764: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537575945s
Feb 8 11:18:57.827: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.60100596s
Feb 8 11:19:00.279: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.052629817s
Feb 8 11:19:02.293: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.066810029s
STEP: Saw pod success
Feb 8 11:19:02.293: INFO: Pod "downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:19:02.303: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005 container dapi-container:
STEP: delete the pod
Feb 8 11:19:02.469: INFO: Waiting for pod downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005 to disappear
Feb 8 11:19:02.486: INFO: Pod downward-api-c8aa09d4-4a64-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:19:02.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kblxt" for this suite.
Feb 8 11:19:08.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:19:08.735: INFO: namespace: e2e-tests-downward-api-kblxt, resource: bindings, ignored listing per whitelist
Feb 8 11:19:08.817: INFO: namespace e2e-tests-downward-api-kblxt deletion completed in 6.305065603s

• [SLOW TEST:17.950 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:19:08.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 8 11:19:09.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-pctzv'
Feb 8 11:19:10.956: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 8 11:19:10.956: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 8 11:19:15.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-pctzv'
Feb 8 11:19:15.685: INFO: stderr: ""
Feb 8 11:19:15.685: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:19:15.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pctzv" for this suite.
Feb 8 11:19:39.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 8 11:19:40.061: INFO: namespace: e2e-tests-kubectl-pctzv, resource: bindings, ignored listing per whitelist Feb 8 11:19:40.080: INFO: namespace e2e-tests-kubectl-pctzv deletion completed in 24.38689821s • [SLOW TEST:31.263 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 8 11:19:40.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 8 11:19:49.955: INFO: 10 pods remaining Feb 8 11:19:49.956: INFO: 10 pods has nil DeletionTimestamp Feb 8 11:19:49.956: INFO: Feb 8 11:19:51.546: INFO: 10 pods remaining Feb 8 11:19:51.546: INFO: 0 pods has nil DeletionTimestamp Feb 8 11:19:51.546: INFO: Feb 8 11:19:51.954: 
INFO: 0 pods remaining Feb 8 11:19:51.954: INFO: 0 pods has nil DeletionTimestamp Feb 8 11:19:51.954: INFO: STEP: Gathering metrics W0208 11:19:52.661016 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 8 11:19:52.661: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 8 11:19:52.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-b9gpm" for this suite. 
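The garbage-collector test above deletes a replication controller with delete options that keep the owner object around until all of its dependent pods are gone, i.e. foreground cascading deletion. A sketch of the options body, shown as YAML for readability (it is sent as JSON when calling the API directly; the exact `apiVersion` string for DeleteOptions is an assumption here):

```yaml
# DeleteOptions body for deleting the RC: with Foreground propagation the
# owner is kept (with a deletionTimestamp set) until its dependents are
# deleted, matching the "keep the rc around" behavior observed in the log.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```

With `propagationPolicy: Orphan` the owner would be deleted immediately and its pods left behind instead.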
Feb 8 11:20:06.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:20:06.839: INFO: namespace: e2e-tests-gc-b9gpm, resource: bindings, ignored listing per whitelist
Feb 8 11:20:06.858: INFO: namespace e2e-tests-gc-b9gpm deletion completed in 14.187239234s
• [SLOW TEST:26.777 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:20:06.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 8 11:20:18.136: INFO: Successfully updated pod "labelsupdatef61bd208-4a64-11ea-95d6-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:20:20.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6qxw6" for this suite.
Feb 8 11:20:44.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:20:44.429: INFO: namespace: e2e-tests-downward-api-6qxw6, resource: bindings, ignored listing per whitelist
Feb 8 11:20:44.696: INFO: namespace e2e-tests-downward-api-6qxw6 deletion completed in 24.438916619s
• [SLOW TEST:37.837 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:20:44.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 8 11:20:44.897: INFO: Waiting up to 5m0s for pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-d6vmx" to be "success or failure"
Feb 8 11:20:44.917: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.460256ms
Feb 8 11:20:46.938: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040705976s
Feb 8 11:20:48.952: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055233462s
Feb 8 11:20:51.383: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.485797068s
Feb 8 11:20:53.665: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767430906s
Feb 8 11:20:55.803: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.905455104s
STEP: Saw pod success
Feb 8 11:20:55.803: INFO: Pod "pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:20:55.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005 container test-container:
STEP: delete the pod
Feb 8 11:20:56.146: INFO: Waiting for pod pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005 to disappear
Feb 8 11:20:56.175: INFO: Pod pod-0c6bf1f4-4a65-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:20:56.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d6vmx" for this suite.
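The "(root,0666,tmpfs)" test name encodes the variant being checked: the file is created as root with mode 0666 on a tmpfs-backed emptyDir. A minimal sketch of a pod using a memory-backed emptyDir (names and the shell command are illustrative; the e2e suite uses its own mounttest image to create the file and verify its mode):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # write a file into the volume and show its permissions
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed, as in the (root,0666,tmpfs) variant
```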
Feb 8 11:21:02.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:21:02.372: INFO: namespace: e2e-tests-emptydir-d6vmx, resource: bindings, ignored listing per whitelist
Feb 8 11:21:02.458: INFO: namespace e2e-tests-emptydir-d6vmx deletion completed in 6.26788336s
• [SLOW TEST:17.762 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:21:02.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 8 11:21:02.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-s9nl9" to be "success or failure"
Feb 8 11:21:02.864: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.332197ms
Feb 8 11:21:05.141: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307722477s
Feb 8 11:21:07.160: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326904372s
Feb 8 11:21:09.180: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347410611s
Feb 8 11:21:11.191: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358397387s
Feb 8 11:21:13.205: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.371795977s
STEP: Saw pod success
Feb 8 11:21:13.205: INFO: Pod "downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:21:13.210: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005 container client-container:
STEP: delete the pod
Feb 8 11:21:13.813: INFO: Waiting for pod downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005 to disappear
Feb 8 11:21:13.825: INFO: Pod downwardapi-volume-17199a2c-4a65-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:21:13.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s9nl9" for this suite.
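The projected downwardAPI test above mounts the container's CPU request as a file via a projected volume. A minimal sketch of the spec shape it exercises (pod name, mount path, file name, and the 250m request are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                # the value projected into the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```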
Feb 8 11:21:19.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:21:20.133: INFO: namespace: e2e-tests-projected-s9nl9, resource: bindings, ignored listing per whitelist
Feb 8 11:21:20.133: INFO: namespace e2e-tests-projected-s9nl9 deletion completed in 6.297426126s
• [SLOW TEST:17.675 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:21:20.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 8 11:21:20.384: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:21:37.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-k4j4s" for this suite.
Feb 8 11:21:45.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:21:45.460: INFO: namespace: e2e-tests-init-container-k4j4s, resource: bindings, ignored listing per whitelist
Feb 8 11:21:45.572: INFO: namespace e2e-tests-init-container-k4j4s deletion completed in 8.21051899s
• [SLOW TEST:25.439 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:21:45.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nfwcn
Feb 8 11:21:55.790: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nfwcn
STEP: checking the pod's current state and verifying that restartCount is present
Feb 8 11:21:55.796: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:25:57.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nfwcn" for this suite.
Feb 8 11:26:05.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:26:05.609: INFO: namespace: e2e-tests-container-probe-nfwcn, resource: bindings, ignored listing per whitelist
Feb 8 11:26:05.918: INFO: namespace e2e-tests-container-probe-nfwcn deletion completed in 8.469029867s
• [SLOW TEST:260.346 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:26:05.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:27:06.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-sm4b9" for this suite.
Feb 8 11:27:30.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:27:30.415: INFO: namespace: e2e-tests-container-probe-sm4b9, resource: bindings, ignored listing per whitelist
Feb 8 11:27:30.419: INFO: namespace e2e-tests-container-probe-sm4b9 deletion completed in 24.223843561s
• [SLOW TEST:84.500 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:27:30.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 8 11:27:30.735: INFO: Waiting up to 5m0s for pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-st8cp" to be "success or failure"
Feb 8 11:27:30.744: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266698ms
Feb 8 11:27:32.771: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036159721s
Feb 8 11:27:34.787: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051483944s
Feb 8 11:27:37.431: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.696099097s
Feb 8 11:27:39.465: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729321557s
Feb 8 11:27:41.520: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.785026606s
STEP: Saw pod success
Feb 8 11:27:41.520: INFO: Pod "downward-api-fe53162e-4a65-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:27:41.564: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-fe53162e-4a65-11ea-95d6-0242ac110005 container dapi-container:
STEP: delete the pod
Feb 8 11:27:42.266: INFO: Waiting for pod downward-api-fe53162e-4a65-11ea-95d6-0242ac110005 to disappear
Feb 8 11:27:42.760: INFO: Pod downward-api-fe53162e-4a65-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:27:42.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-st8cp" for this suite.
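The test above checks the downward API's fallback behavior: when a container declares no resource limits, `resourceFieldRef` for `limits.cpu` and `limits.memory` reports the node's allocatable capacity instead. A minimal sketch (pod and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEM_LIMIT"]
    # Note: no resources.limits set, so the values below come from
    # node allocatable rather than from an explicit container limit.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```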
Feb 8 11:27:50.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:27:50.924: INFO: namespace: e2e-tests-downward-api-st8cp, resource: bindings, ignored listing per whitelist
Feb 8 11:27:51.048: INFO: namespace e2e-tests-downward-api-st8cp deletion completed in 8.269013181s
• [SLOW TEST:20.629 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:27:51.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0a883fe0-4a66-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 8 11:27:51.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-hwcsm" to be "success or failure"
Feb 8 11:27:51.311: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.215249ms
Feb 8 11:27:53.730: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44457759s
Feb 8 11:27:55.744: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.458485351s
Feb 8 11:27:57.759: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.47435661s
Feb 8 11:28:00.042: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.757227525s
Feb 8 11:28:02.067: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.782315982s
STEP: Saw pod success
Feb 8 11:28:02.068: INFO: Pod "pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb 8 11:28:02.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Feb 8 11:28:03.104: INFO: Waiting for pod pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005 to disappear
Feb 8 11:28:03.115: INFO: Pod pod-configmaps-0a88e694-4a66-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:28:03.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hwcsm" for this suite.
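The "as non-root" variant above mounts a ConfigMap volume into a pod whose process runs as a non-root UID. A minimal sketch of the shape being tested (names, UID, and the assumed pre-existing ConfigMap/key are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, as the test title requires
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume   # assumed to exist with a key "data-1"
```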
Feb 8 11:28:09.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 11:28:09.490: INFO: namespace: e2e-tests-configmap-hwcsm, resource: bindings, ignored listing per whitelist
Feb 8 11:28:09.529: INFO: namespace e2e-tests-configmap-hwcsm deletion completed in 6.399760402s
• [SLOW TEST:18.481 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 8 11:28:09.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005
Feb 8 11:28:09.831: INFO: Pod name my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005: Found 0 pods out of 1
Feb 8 11:28:15.370: INFO: Pod name my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005: Found 1 pods out of 1
Feb 8 11:28:15.370: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005" are running
Feb 8 11:28:19.392: INFO: Pod "my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005-jht7k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:28:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:28:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:28:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 11:28:09 +0000 UTC Reason: Message:}])
Feb 8 11:28:19.392: INFO: Trying to dial the pod
Feb 8 11:28:24.420: INFO: Controller my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005: Got expected result from replica 1 [my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005-jht7k]: "my-hostname-basic-159fb816-4a66-11ea-95d6-0242ac110005-jht7k", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 8 11:28:24.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-sfktc" for this suite.
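The ReplicationController test above creates an RC whose replicas each serve their own hostname, then dials each replica and expects the pod's name back (as seen in the "Got expected result from replica 1" line). A minimal sketch of such an RC (the name is illustrative, and the serve-hostname image reference is an assumption; the e2e suite uses its own build of a hostname-serving image):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic        # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve-hostname:1.1   # assumed image that replies with its pod name
        ports:
        - containerPort: 9376
```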
Feb  8 11:28:30.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:28:30.686: INFO: namespace: e2e-tests-replication-controller-sfktc, resource: bindings, ignored listing per whitelist
Feb  8 11:28:30.692: INFO: namespace e2e-tests-replication-controller-sfktc deletion completed in 6.264801612s

• [SLOW TEST:21.163 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:28:30.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-22268c38-4a66-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 11:28:30.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-kwgt6" to be "success or failure"
Feb  8 11:28:30.918: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.755315ms
Feb  8 11:28:33.412: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523463586s
Feb  8 11:28:35.426: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537337137s
Feb  8 11:28:37.892: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.003821071s
Feb  8 11:28:40.011: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.122294533s
Feb  8 11:28:42.031: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.142303956s
STEP: Saw pod success
Feb  8 11:28:42.031: INFO: Pod "pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:28:42.047: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  8 11:28:42.724: INFO: Waiting for pod pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005 to disappear
Feb  8 11:28:42.729: INFO: Pod pod-configmaps-22284031-4a66-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:28:42.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kwgt6" for this suite.
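The ConfigMap volume test above creates a ConfigMap, mounts it into a short-lived pod, and checks the container's output. A hand-run equivalent, with all names illustrative and a reachable cluster assumed:

```shell
# Illustrative ConfigMap with one key.
kubectl create configmap test-volume --from-literal=data-1=value-1

# Pod that mounts the ConfigMap as a volume and cats the key's file.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: test-volume
EOF

# The e2e test polls for Phase="Succeeded", then reads the container logs,
# which should contain the key's value ("value-1" here).
kubectl logs pod-configmap-volume
```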
Feb  8 11:28:48.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:28:48.945: INFO: namespace: e2e-tests-configmap-kwgt6, resource: bindings, ignored listing per whitelist
Feb  8 11:28:49.054: INFO: namespace e2e-tests-configmap-kwgt6 deletion completed in 6.318713869s

• [SLOW TEST:18.362 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:28:49.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  8 11:28:49.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:28:49.742: INFO: stderr: ""
Feb  8 11:28:49.742: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 11:28:49.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:28:49.997: INFO: stderr: ""
Feb  8 11:28:49.997: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-sgpqk "
Feb  8 11:28:49.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:28:50.199: INFO: stderr: ""
Feb  8 11:28:50.199: INFO: stdout: ""
Feb  8 11:28:50.199: INFO: update-demo-nautilus-2rcxd is created but not running
Feb  8 11:28:55.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:28:55.366: INFO: stderr: ""
Feb  8 11:28:55.366: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-sgpqk "
Feb  8 11:28:55.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:28:55.506: INFO: stderr: ""
Feb  8 11:28:55.506: INFO: stdout: ""
Feb  8 11:28:55.506: INFO: update-demo-nautilus-2rcxd is created but not running
Feb  8 11:29:00.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:00.685: INFO: stderr: ""
Feb  8 11:29:00.685: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-sgpqk "
Feb  8 11:29:00.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:00.832: INFO: stderr: ""
Feb  8 11:29:00.832: INFO: stdout: "true"
Feb  8 11:29:00.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:00.937: INFO: stderr: ""
Feb  8 11:29:00.937: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:00.937: INFO: validating pod update-demo-nautilus-2rcxd
Feb  8 11:29:00.981: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:00.981: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:00.981: INFO: update-demo-nautilus-2rcxd is verified up and running
Feb  8 11:29:00.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgpqk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:01.158: INFO: stderr: ""
Feb  8 11:29:01.158: INFO: stdout: "true"
Feb  8 11:29:01.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgpqk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:01.314: INFO: stderr: ""
Feb  8 11:29:01.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:01.314: INFO: validating pod update-demo-nautilus-sgpqk
Feb  8 11:29:01.322: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:01.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:01.323: INFO: update-demo-nautilus-sgpqk is verified up and running
STEP: scaling down the replication controller
Feb  8 11:29:01.325: INFO: scanned /root for discovery docs: 
Feb  8 11:29:01.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:02.552: INFO: stderr: ""
Feb  8 11:29:02.552: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 11:29:02.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:02.805: INFO: stderr: ""
Feb  8 11:29:02.805: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-sgpqk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  8 11:29:07.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:08.078: INFO: stderr: ""
Feb  8 11:29:08.078: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-sgpqk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  8 11:29:13.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:14.709: INFO: stderr: ""
Feb  8 11:29:14.709: INFO: stdout: "update-demo-nautilus-2rcxd "
Feb  8 11:29:14.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:14.850: INFO: stderr: ""
Feb  8 11:29:14.851: INFO: stdout: "true"
Feb  8 11:29:14.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:15.124: INFO: stderr: ""
Feb  8 11:29:15.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:15.124: INFO: validating pod update-demo-nautilus-2rcxd
Feb  8 11:29:15.141: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:15.141: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:15.141: INFO: update-demo-nautilus-2rcxd is verified up and running
STEP: scaling up the replication controller
Feb  8 11:29:15.144: INFO: scanned /root for discovery docs: 
Feb  8 11:29:15.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:17.374: INFO: stderr: ""
Feb  8 11:29:17.374: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 11:29:17.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:17.614: INFO: stderr: ""
Feb  8 11:29:17.614: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-tq955 "
Feb  8 11:29:17.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:18.297: INFO: stderr: ""
Feb  8 11:29:18.297: INFO: stdout: "true"
Feb  8 11:29:18.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:18.489: INFO: stderr: ""
Feb  8 11:29:18.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:18.489: INFO: validating pod update-demo-nautilus-2rcxd
Feb  8 11:29:18.508: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:18.508: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:18.508: INFO: update-demo-nautilus-2rcxd is verified up and running
Feb  8 11:29:18.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq955 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:18.729: INFO: stderr: ""
Feb  8 11:29:18.729: INFO: stdout: ""
Feb  8 11:29:18.729: INFO: update-demo-nautilus-tq955 is created but not running
Feb  8 11:29:23.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:23.917: INFO: stderr: ""
Feb  8 11:29:23.917: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-tq955 "
Feb  8 11:29:23.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:24.156: INFO: stderr: ""
Feb  8 11:29:24.157: INFO: stdout: "true"
Feb  8 11:29:24.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:24.311: INFO: stderr: ""
Feb  8 11:29:24.311: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:24.311: INFO: validating pod update-demo-nautilus-2rcxd
Feb  8 11:29:24.356: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:24.356: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:24.356: INFO: update-demo-nautilus-2rcxd is verified up and running
Feb  8 11:29:24.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq955 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:24.561: INFO: stderr: ""
Feb  8 11:29:24.561: INFO: stdout: ""
Feb  8 11:29:24.561: INFO: update-demo-nautilus-tq955 is created but not running
Feb  8 11:29:29.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:29.690: INFO: stderr: ""
Feb  8 11:29:29.690: INFO: stdout: "update-demo-nautilus-2rcxd update-demo-nautilus-tq955 "
Feb  8 11:29:29.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:29.796: INFO: stderr: ""
Feb  8 11:29:29.796: INFO: stdout: "true"
Feb  8 11:29:29.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rcxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:29.914: INFO: stderr: ""
Feb  8 11:29:29.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:29.914: INFO: validating pod update-demo-nautilus-2rcxd
Feb  8 11:29:29.931: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:29.931: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:29.931: INFO: update-demo-nautilus-2rcxd is verified up and running
Feb  8 11:29:29.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq955 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:30.071: INFO: stderr: ""
Feb  8 11:29:30.072: INFO: stdout: "true"
Feb  8 11:29:30.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq955 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:30.216: INFO: stderr: ""
Feb  8 11:29:30.216: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:29:30.216: INFO: validating pod update-demo-nautilus-tq955
Feb  8 11:29:30.224: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb  8 11:29:30.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:29:30.224: INFO: update-demo-nautilus-tq955 is verified up and running
STEP: using delete to clean up resources
Feb  8 11:29:30.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:30.360: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:29:30.360: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  8 11:29:30.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-2qc7n'
Feb  8 11:29:30.628: INFO: stderr: "No resources found.\n"
Feb  8 11:29:30.629: INFO: stdout: ""
Feb  8 11:29:30.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-2qc7n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 11:29:30.805: INFO: stderr: ""
Feb  8 11:29:30.805: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:29:30.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2qc7n" for this suite.
Feb  8 11:29:46.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:29:47.140: INFO: namespace: e2e-tests-kubectl-2qc7n, resource: bindings, ignored listing per whitelist
Feb  8 11:29:47.143: INFO: namespace e2e-tests-kubectl-2qc7n deletion completed in 16.315352363s

• [SLOW TEST:58.088 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:29:47.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 11:29:47.349: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 11.883391ms)
Feb  8 11:29:47.353: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.809302ms)
Feb  8 11:29:47.358: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.289221ms)
Feb  8 11:29:47.362: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.712832ms)
Feb  8 11:29:47.366: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.467807ms)
Feb  8 11:29:47.370: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.0495ms)
Feb  8 11:29:47.376: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.689196ms)
Feb  8 11:29:47.427: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.287486ms)
Feb  8 11:29:47.433: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.02233ms)
Feb  8 11:29:47.438: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.791529ms)
Feb  8 11:29:47.443: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.459309ms)
Feb  8 11:29:47.449: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.076456ms)
Feb  8 11:29:47.455: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.425593ms)
Feb  8 11:29:47.461: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.016981ms)
Feb  8 11:29:47.469: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.746716ms)
Feb  8 11:29:47.475: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.894704ms)
Feb  8 11:29:47.480: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.852286ms)
Feb  8 11:29:47.484: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.784979ms)
Feb  8 11:29:47.489: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.062088ms)
Feb  8 11:29:47.495: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.394505ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:29:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-2ln57" for this suite.
Feb  8 11:29:53.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:29:53.790: INFO: namespace: e2e-tests-proxy-2ln57, resource: bindings, ignored listing per whitelist
Feb  8 11:29:53.932: INFO: namespace e2e-tests-proxy-2ln57 deletion completed in 6.432285202s

• [SLOW TEST:6.789 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
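The twenty iterations above all fetch the kubelet's log directory through the apiserver's node proxy subresource, with the kubelet port (10250) given explicitly in the node name. The same request can be issued by hand; the node name is taken from the log, so adjust it for your own cluster:

```shell
# Fetch the kubelet /logs/ listing via the apiserver node proxy
# subresource, kubelet port stated explicitly after the node name.
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
```

The test checks only that each request returns HTTP 200 and records the latency, which is what the trailing "(200; …ms)" annotations report.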
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:29:53.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 11:29:54.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-gf5g8" to be "success or failure"
Feb  8 11:29:54.298: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.321642ms
Feb  8 11:29:56.614: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34882267s
Feb  8 11:29:58.651: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385875236s
Feb  8 11:30:00.663: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397974936s
Feb  8 11:30:02.676: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.411452094s
Feb  8 11:30:04.690: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425497922s
STEP: Saw pod success
Feb  8 11:30:04.690: INFO: Pod "downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:30:04.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 11:30:04.868: INFO: Waiting for pod downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005 to disappear
Feb  8 11:30:04.884: INFO: Pod downwardapi-volume-53d436b6-4a66-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:30:04.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gf5g8" for this suite.
Feb  8 11:30:10.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:30:10.953: INFO: namespace: e2e-tests-downward-api-gf5g8, resource: bindings, ignored listing per whitelist
Feb  8 11:30:11.152: INFO: namespace e2e-tests-downward-api-gf5g8 deletion completed in 6.263247807s

• [SLOW TEST:17.220 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
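The Downward API test above projects the container's own memory limit into a file through a downwardAPI volume and checks the pod's output. A minimal hand-run sketch, with all names illustrative:

```shell
# Pod that mounts its own memory limit as a file; the test's pod is
# generated, this manifest only sketches the same mechanism.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF

# Once the pod reaches Succeeded, its log holds the limit in bytes.
kubectl logs downwardapi-volume-demo
```

The downward API reports `limits.memory` in bytes, so a 64Mi limit appears as 67108864 in the mounted file.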
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:30:11.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb  8 11:30:11.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:11.966: INFO: stderr: ""
Feb  8 11:30:11.966: INFO: stdout: "pod/pause created\n"
Feb  8 11:30:11.966: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  8 11:30:11.966: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-xvmjj" to be "running and ready"
Feb  8 11:30:11.999: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.49107ms
Feb  8 11:30:14.180: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213663286s
Feb  8 11:30:16.270: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304327691s
Feb  8 11:30:18.289: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.322764966s
Feb  8 11:30:20.304: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.337975558s
Feb  8 11:30:22.318: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.351731891s
Feb  8 11:30:22.318: INFO: Pod "pause" satisfied condition "running and ready"
Feb  8 11:30:22.318: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  8 11:30:22.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:22.549: INFO: stderr: ""
Feb  8 11:30:22.549: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  8 11:30:22.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:22.724: INFO: stderr: ""
Feb  8 11:30:22.725: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  8 11:30:22.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:22.878: INFO: stderr: ""
Feb  8 11:30:22.878: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  8 11:30:22.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:23.004: INFO: stderr: ""
Feb  8 11:30:23.004: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb  8 11:30:23.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:23.168: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:30:23.168: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  8 11:30:23.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-xvmjj'
Feb  8 11:30:23.327: INFO: stderr: "No resources found.\n"
Feb  8 11:30:23.327: INFO: stdout: ""
Feb  8 11:30:23.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-xvmjj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 11:30:23.468: INFO: stderr: ""
Feb  8 11:30:23.469: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:30:23.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xvmjj" for this suite.
Feb  8 11:30:29.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:30:29.677: INFO: namespace: e2e-tests-kubectl-xvmjj, resource: bindings, ignored listing per whitelist
Feb  8 11:30:29.682: INFO: namespace e2e-tests-kubectl-xvmjj deletion completed in 6.201242573s

• [SLOW TEST:18.530 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:30:29.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 11:30:29.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:30:39.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nrw7m" for this suite.
Feb  8 11:31:26.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:31:26.221: INFO: namespace: e2e-tests-pods-nrw7m, resource: bindings, ignored listing per whitelist
Feb  8 11:31:26.412: INFO: namespace e2e-tests-pods-nrw7m deletion completed in 46.458144042s

• [SLOW TEST:56.730 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:31:26.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 11:31:26.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-7s449" to be "success or failure"
Feb  8 11:31:26.908: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099128ms
Feb  8 11:31:29.585: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683645164s
Feb  8 11:31:31.600: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.69855926s
Feb  8 11:31:33.656: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754178783s
Feb  8 11:31:35.669: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767389978s
Feb  8 11:31:37.684: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.782141996s
STEP: Saw pod success
Feb  8 11:31:37.684: INFO: Pod "downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:31:37.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 11:31:37.744: INFO: Waiting for pod downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005 to disappear
Feb  8 11:31:37.762: INFO: Pod downwardapi-volume-8b14e0a4-4a66-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:31:37.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7s449" for this suite.
Feb  8 11:31:43.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:31:45.324: INFO: namespace: e2e-tests-downward-api-7s449, resource: bindings, ignored listing per whitelist
Feb  8 11:31:45.371: INFO: namespace e2e-tests-downward-api-7s449 deletion completed in 7.532895139s

• [SLOW TEST:18.959 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
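The Downward API volume test above creates a pod whose CPU limit is exposed to the container as a file via a `downwardAPI` volume. A minimal sketch of that kind of pod (names and image are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the projected cpu limit, then exit (so the pod reaches Succeeded,
    # matching the "success or failure" condition the test waits on).
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

The test then reads the container's logs and asserts they contain the expected limit value.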
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:31:45.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  8 11:31:45.589: INFO: Waiting up to 5m0s for pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-fxfzw" to be "success or failure"
Feb  8 11:31:45.596: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.46954ms
Feb  8 11:31:47.918: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329165625s
Feb  8 11:31:49.948: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359356006s
Feb  8 11:31:51.963: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374140358s
Feb  8 11:31:54.032: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.443711673s
Feb  8 11:31:56.321: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.732056132s
STEP: Saw pod success
Feb  8 11:31:56.321: INFO: Pod "downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:31:56.328: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  8 11:31:56.585: INFO: Waiting for pod downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005 to disappear
Feb  8 11:31:56.623: INFO: Pod downward-api-963a8ba6-4a66-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:31:56.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fxfzw" for this suite.
Feb  8 11:32:02.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:32:02.892: INFO: namespace: e2e-tests-downward-api-fxfzw, resource: bindings, ignored listing per whitelist
Feb  8 11:32:02.910: INFO: namespace e2e-tests-downward-api-fxfzw deletion completed in 6.279641843s

• [SLOW TEST:17.539 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
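The `[sig-node] Downward API` test above differs from the volume variant: it injects the same resource values as environment variables via `valueFrom.resourceFieldRef`. A minimal sketch (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # dump env vars so the test can check them in the logs
    resources:
      requests:
        cpu: "250m"
        memory: "32Mi"
      limits:
        cpu: "1"
        memory: "64Mi"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```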
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:32:02.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-a0b54915-4a66-11ea-95d6-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-a0b549b8-4a66-11ea-95d6-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a0b54915-4a66-11ea-95d6-0242ac110005
STEP: Updating configmap cm-test-opt-upd-a0b549b8-4a66-11ea-95d6-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-a0b54a0c-4a66-11ea-95d6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:33:38.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rcffn" for this suite.
Feb  8 11:34:02.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:34:02.172: INFO: namespace: e2e-tests-projected-rcffn, resource: bindings, ignored listing per whitelist
Feb  8 11:34:02.282: INFO: namespace e2e-tests-projected-rcffn deletion completed in 24.260504923s

• [SLOW TEST:119.370 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
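The projected-configMap test above mounts several ConfigMaps through a single `projected` volume with `optional: true`, then deletes one, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted files. A sketch of the volume shape involved (ConfigMap names abbreviated from the log; the real ones carry generated UID suffixes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example   # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # keep the pod running while updates are observed
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      # optional: true lets the pod run even after cm-test-opt-del is deleted,
      # and before cm-test-opt-create exists.
      - configMap:
          name: cm-test-opt-del
          optional: true
      - configMap:
          name: cm-test-opt-upd
          optional: true
      - configMap:
          name: cm-test-opt-create
          optional: true
```

The long wall-clock time of this spec (119 seconds) comes from polling until the kubelet's periodic volume sync propagates all three changes.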
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:34:02.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-swckn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 11:34:02.656: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 11:34:33.011: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-swckn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 11:34:33.011: INFO: >>> kubeConfig: /root/.kube/config
I0208 11:34:33.076528       8 log.go:172] (0xc000aa44d0) (0xc000f21040) Create stream
I0208 11:34:33.076791       8 log.go:172] (0xc000aa44d0) (0xc000f21040) Stream added, broadcasting: 1
I0208 11:34:33.086634       8 log.go:172] (0xc000aa44d0) Reply frame received for 1
I0208 11:34:33.086714       8 log.go:172] (0xc000aa44d0) (0xc000fcc500) Create stream
I0208 11:34:33.086738       8 log.go:172] (0xc000aa44d0) (0xc000fcc500) Stream added, broadcasting: 3
I0208 11:34:33.088848       8 log.go:172] (0xc000aa44d0) Reply frame received for 3
I0208 11:34:33.088911       8 log.go:172] (0xc000aa44d0) (0xc001aa74a0) Create stream
I0208 11:34:33.088920       8 log.go:172] (0xc000aa44d0) (0xc001aa74a0) Stream added, broadcasting: 5
I0208 11:34:33.090092       8 log.go:172] (0xc000aa44d0) Reply frame received for 5
I0208 11:34:34.247391       8 log.go:172] (0xc000aa44d0) Data frame received for 3
I0208 11:34:34.247445       8 log.go:172] (0xc000fcc500) (3) Data frame handling
I0208 11:34:34.247471       8 log.go:172] (0xc000fcc500) (3) Data frame sent
I0208 11:34:34.509162       8 log.go:172] (0xc000aa44d0) (0xc000fcc500) Stream removed, broadcasting: 3
I0208 11:34:34.509556       8 log.go:172] (0xc000aa44d0) (0xc001aa74a0) Stream removed, broadcasting: 5
I0208 11:34:34.509651       8 log.go:172] (0xc000aa44d0) Data frame received for 1
I0208 11:34:34.509716       8 log.go:172] (0xc000f21040) (1) Data frame handling
I0208 11:34:34.509745       8 log.go:172] (0xc000f21040) (1) Data frame sent
I0208 11:34:34.509775       8 log.go:172] (0xc000aa44d0) (0xc000f21040) Stream removed, broadcasting: 1
I0208 11:34:34.509812       8 log.go:172] (0xc000aa44d0) Go away received
I0208 11:34:34.510112       8 log.go:172] (0xc000aa44d0) (0xc000f21040) Stream removed, broadcasting: 1
I0208 11:34:34.510139       8 log.go:172] (0xc000aa44d0) (0xc000fcc500) Stream removed, broadcasting: 3
I0208 11:34:34.510150       8 log.go:172] (0xc000aa44d0) (0xc001aa74a0) Stream removed, broadcasting: 5
Feb  8 11:34:34.510: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:34:34.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-swckn" for this suite.
Feb  8 11:34:48.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:34:48.789: INFO: namespace: e2e-tests-pod-network-test-swckn, resource: bindings, ignored listing per whitelist
Feb  8 11:34:48.823: INFO: namespace e2e-tests-pod-network-test-swckn deletion completed in 14.224258896s

• [SLOW TEST:46.541 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:34:48.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-r6f87
Feb  8 11:34:59.218: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-r6f87
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 11:34:59.224: INFO: Initial restart count of pod liveness-http is 0
Feb  8 11:35:25.703: INFO: Restart count of pod e2e-tests-container-probe-r6f87/liveness-http is now 1 (26.478996513s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:35:25.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-r6f87" for this suite.
Feb  8 11:35:33.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:35:34.091: INFO: namespace: e2e-tests-container-probe-r6f87, resource: bindings, ignored listing per whitelist
Feb  8 11:35:34.115: INFO: namespace e2e-tests-container-probe-r6f87 deletion completed in 8.215026334s

• [SLOW TEST:45.290 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
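The liveness-probe test above runs a pod whose container serves `/healthz` and starts failing after a while; the kubelet's `httpGet` probe then restarts it, which the test detects as `restartCount` going from 0 to 1. A minimal sketch of such a probe (image and timings are illustrative, not the exact values this suite uses):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http   # name taken from the log above
spec:
  containers:
  - name: liveness
    image: registry.example/liveness   # hypothetical image that 500s on /healthz after ~10s
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      # failureThreshold * periodSeconds bounds how quickly the restart happens;
      # the ~26s observed above is consistent with a short period and delay.
      periodSeconds: 5
      failureThreshold: 1
```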
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:35:34.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  8 11:35:34.408: INFO: Waiting up to 5m0s for pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-jx26h" to be "success or failure"
Feb  8 11:35:34.566: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 157.902314ms
Feb  8 11:35:36.641: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232390452s
Feb  8 11:35:38.658: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249393261s
Feb  8 11:35:40.675: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.266351128s
Feb  8 11:35:42.778: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369778319s
Feb  8 11:35:44.792: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.38383301s
STEP: Saw pod success
Feb  8 11:35:44.792: INFO: Pod "pod-1e9df725-4a67-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:35:44.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1e9df725-4a67-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 11:35:44.890: INFO: Waiting for pod pod-1e9df725-4a67-11ea-95d6-0242ac110005 to disappear
Feb  8 11:35:44.989: INFO: Pod pod-1e9df725-4a67-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:35:44.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jx26h" for this suite.
Feb  8 11:35:51.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:35:51.278: INFO: namespace: e2e-tests-emptydir-jx26h, resource: bindings, ignored listing per whitelist
Feb  8 11:35:51.423: INFO: namespace e2e-tests-emptydir-jx26h deletion completed in 6.401115005s

• [SLOW TEST:17.309 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
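The EmptyDir `(root,0777,default)` case above checks an `emptyDir` volume on the default medium (node disk, not `medium: Memory`), written as root, with the file mode expected to be 0777. A minimal sketch (image and command are illustrative; the suite uses its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Report the mount and permissions, then exit so the pod reaches Succeeded.
    command: ["sh", "-c", "ls -ld /test-volume && id -u"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium field => "default" medium backed by node storage
```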
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:35:51.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  8 11:35:51.736: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-d7794,SelfLink:/api/v1/namespaces/e2e-tests-watch-d7794/configmaps/e2e-watch-test-watch-closed,UID:28f1f7a3-4a67-11ea-a994-fa163e34d433,ResourceVersion:20969623,Generation:0,CreationTimestamp:2020-02-08 11:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 11:35:51.736: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-d7794,SelfLink:/api/v1/namespaces/e2e-tests-watch-d7794/configmaps/e2e-watch-test-watch-closed,UID:28f1f7a3-4a67-11ea-a994-fa163e34d433,ResourceVersion:20969624,Generation:0,CreationTimestamp:2020-02-08 11:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  8 11:35:51.801: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-d7794,SelfLink:/api/v1/namespaces/e2e-tests-watch-d7794/configmaps/e2e-watch-test-watch-closed,UID:28f1f7a3-4a67-11ea-a994-fa163e34d433,ResourceVersion:20969625,Generation:0,CreationTimestamp:2020-02-08 11:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 11:35:51.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-d7794,SelfLink:/api/v1/namespaces/e2e-tests-watch-d7794/configmaps/e2e-watch-test-watch-closed,UID:28f1f7a3-4a67-11ea-a994-fa163e34d433,ResourceVersion:20969626,Generation:0,CreationTimestamp:2020-02-08 11:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:35:51.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-d7794" for this suite.
Feb  8 11:35:57.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:35:57.955: INFO: namespace: e2e-tests-watch-d7794, resource: bindings, ignored listing per whitelist
Feb  8 11:35:57.966: INFO: namespace e2e-tests-watch-d7794 deletion completed in 6.156792839s

• [SLOW TEST:6.542 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:35:57.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-r2qzf
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-r2qzf
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-r2qzf
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-r2qzf
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-r2qzf
Feb  8 11:36:12.330: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-r2qzf, name: ss-0, uid: 331aa63d-4a67-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb  8 11:36:12.645: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-r2qzf, name: ss-0, uid: 331aa63d-4a67-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  8 11:36:12.708: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-r2qzf, name: ss-0, uid: 331aa63d-4a67-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  8 11:36:12.720: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-r2qzf
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-r2qzf
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-r2qzf and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  8 11:36:25.292: INFO: Deleting all statefulset in ns e2e-tests-statefulset-r2qzf
Feb  8 11:36:25.302: INFO: Scaling statefulset ss to 0
Feb  8 11:36:45.356: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 11:36:45.363: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:36:45.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-r2qzf" for this suite.
Feb  8 11:36:53.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:36:53.682: INFO: namespace: e2e-tests-statefulset-r2qzf, resource: bindings, ignored listing per whitelist
Feb  8 11:36:53.727: INFO: namespace e2e-tests-statefulset-r2qzf deletion completed in 8.271806659s

• [SLOW TEST:55.761 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
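The StatefulSet test above schedules a plain pod and a StatefulSet whose pods request the same host port, then checks that the evicted stateful pod is recreated once the conflict is removed. A minimal sketch of such a StatefulSet follows; the name, labels, and port number are illustrative, not the values the e2e framework generates:

```yaml
# Illustrative only: a StatefulSet whose pods request a hostPort.
# If another pod on the node already binds that hostPort, the stateful
# pod fails and the controller recreates it, as observed in the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017   # assumed port; the test picks its own
```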
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:36:53.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  8 11:36:54.123: INFO: Waiting up to 5m0s for pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-dmsxc" to be "success or failure"
Feb  8 11:36:54.153: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.006539ms
Feb  8 11:36:56.177: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054382917s
Feb  8 11:36:58.186: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063536319s
Feb  8 11:37:00.354: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23102897s
Feb  8 11:37:02.378: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.25500379s
Feb  8 11:37:04.430: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.307473134s
STEP: Saw pod success
Feb  8 11:37:04.430: INFO: Pod "pod-4e10414a-4a67-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:37:04.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4e10414a-4a67-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 11:37:04.749: INFO: Waiting for pod pod-4e10414a-4a67-11ea-95d6-0242ac110005 to disappear
Feb  8 11:37:04.762: INFO: Pod pod-4e10414a-4a67-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:37:04.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dmsxc" for this suite.
Feb  8 11:37:10.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:37:10.880: INFO: namespace: e2e-tests-emptydir-dmsxc, resource: bindings, ignored listing per whitelist
Feb  8 11:37:10.952: INFO: namespace e2e-tests-emptydir-dmsxc deletion completed in 6.17912718s

• [SLOW TEST:17.225 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
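The EmptyDir test name `(root,0666,default)` encodes its parameters: run as root, expect file mode 0666, use the default (disk-backed) medium. A hedged sketch of an equivalent pod, with an assumed name and a busybox stand-in for the test's mounttest image:

```yaml
# Illustrative sketch: mount an emptyDir (default medium) and create a
# file with mode 0666 as root, mirroring what the e2e pod verifies.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "touch /test-volume/file && chmod 0666 /test-volume/file && stat -c %a /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```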
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:37:10.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-mttt
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 11:37:11.177: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mttt" in namespace "e2e-tests-subpath-86dcq" to be "success or failure"
Feb  8 11:37:11.188: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.038792ms
Feb  8 11:37:13.202: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024563289s
Feb  8 11:37:15.211: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033865416s
Feb  8 11:37:17.437: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259675354s
Feb  8 11:37:19.447: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26962863s
Feb  8 11:37:21.522: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.345101494s
Feb  8 11:37:23.562: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.385131033s
Feb  8 11:37:25.575: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.397343439s
Feb  8 11:37:27.594: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 16.416853276s
Feb  8 11:37:29.618: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 18.440482898s
Feb  8 11:37:31.631: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 20.453875285s
Feb  8 11:37:33.653: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 22.475960354s
Feb  8 11:37:35.686: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 24.508634057s
Feb  8 11:37:37.711: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 26.533539407s
Feb  8 11:37:39.726: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 28.5482455s
Feb  8 11:37:41.739: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 30.562006838s
Feb  8 11:37:43.930: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Running", Reason="", readiness=false. Elapsed: 32.752294188s
Feb  8 11:37:45.945: INFO: Pod "pod-subpath-test-secret-mttt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.76750539s
STEP: Saw pod success
Feb  8 11:37:45.945: INFO: Pod "pod-subpath-test-secret-mttt" satisfied condition "success or failure"
Feb  8 11:37:45.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-mttt container test-container-subpath-secret-mttt: 
STEP: delete the pod
Feb  8 11:37:47.288: INFO: Waiting for pod pod-subpath-test-secret-mttt to disappear
Feb  8 11:37:47.915: INFO: Pod pod-subpath-test-secret-mttt no longer exists
STEP: Deleting pod pod-subpath-test-secret-mttt
Feb  8 11:37:47.916: INFO: Deleting pod "pod-subpath-test-secret-mttt" in namespace "e2e-tests-subpath-86dcq"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:37:47.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-86dcq" for this suite.
Feb  8 11:37:56.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:37:56.417: INFO: namespace: e2e-tests-subpath-86dcq, resource: bindings, ignored listing per whitelist
Feb  8 11:37:56.446: INFO: namespace e2e-tests-subpath-86dcq deletion completed in 8.376381787s

• [SLOW TEST:45.493 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
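The Subpath test exercises `volumeMounts.subPath` against an atomic-writer volume (a Secret). A minimal sketch of the mechanism under test, assuming a pre-existing Secret and key (both names are hypothetical):

```yaml
# Illustrative sketch: project a single Secret key into a pod as one
# file via volumeMounts.subPath, the mechanism this test covers.
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/secret-file"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-file
      subPath: data-1          # mounts only this key as a file
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret    # assumed to exist with key data-1
```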
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:37:56.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  8 11:37:56.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-zrsgz,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrsgz/configmaps/e2e-watch-test-resource-version,UID:736ffac8-4a67-11ea-a994-fa163e34d433,ResourceVersion:20970006,Generation:0,CreationTimestamp:2020-02-08 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 11:37:56.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-zrsgz,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrsgz/configmaps/e2e-watch-test-resource-version,UID:736ffac8-4a67-11ea-a994-fa163e34d433,ResourceVersion:20970007,Generation:0,CreationTimestamp:2020-02-08 11:37:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:37:56.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zrsgz" for this suite.
Feb  8 11:38:02.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:38:03.008: INFO: namespace: e2e-tests-watch-zrsgz, resource: bindings, ignored listing per whitelist
Feb  8 11:38:03.064: INFO: namespace e2e-tests-watch-zrsgz deletion completed in 6.293401812s

• [SLOW TEST:6.618 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
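The Watchers test asserts that a watch opened with an explicit `resourceVersion` replays only events after that version (here, the second MODIFIED and the DELETED event). A cluster-dependent sketch of the same API call, using `kubectl proxy` and the list endpoint's `watch`/`resourceVersion` query parameters; the resource version shown is taken from the log and is only valid in that cluster:

```shell
# Illustrative sketch (requires a running cluster):
# events at resourceVersions <= the given value are not delivered.
kubectl proxy --port=8001 &
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=20970005"
```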
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:38:03.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-55mz9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-55mz9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 11:38:19.433: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.450: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.459: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.468: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.472: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.477: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.482: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.486: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.490: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.494: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.498: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.506: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.511: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.519: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.524: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.533: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005)
Feb  8 11:38:19.533: INFO: Lookups using e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-55mz9.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  8 11:38:24.701: INFO: DNS probes using e2e-tests-dns-55mz9/dns-test-7748c6ce-4a67-11ea-95d6-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:38:24.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-55mz9" for this suite.
Feb  8 11:38:33.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:38:33.076: INFO: namespace: e2e-tests-dns-55mz9, resource: bindings, ignored listing per whitelist
Feb  8 11:38:33.191: INFO: namespace e2e-tests-dns-55mz9 deletion completed in 8.356819148s

• [SLOW TEST:30.127 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
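The DNS test polls the probe pod's result files until every expected record resolves, which is why the early "Unable to read" lines are followed by an overall success. Outside the e2e framework, the same cluster-DNS check can be approximated with a throwaway pod (name and image choice are illustrative):

```yaml
# Illustrative sketch: probe cluster DNS by hand, analogous to the
# wheezy/jessie probe pods driven by the dig loops above.
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox:1.28   # busybox >= 1.28.1 ships a working nslookup
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]
```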
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:38:33.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  8 11:38:44.042: INFO: Successfully updated pod "annotationupdate894db25e-4a67-11ea-95d6-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:38:46.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f46fn" for this suite.
Feb  8 11:39:08.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:39:08.376: INFO: namespace: e2e-tests-projected-f46fn, resource: bindings, ignored listing per whitelist
Feb  8 11:39:08.415: INFO: namespace e2e-tests-projected-f46fn deletion completed in 22.234894701s

• [SLOW TEST:35.223 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
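The Projected downwardAPI test updates a pod's annotations and waits for the kubelet to rewrite the projected file inside the running container. A hedged sketch of such a pod (names and annotation values are assumptions):

```yaml
# Illustrative sketch: expose pod annotations through a projected
# downwardAPI volume; the kubelet refreshes the file when the
# annotations change, which is the behavior the test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo   # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```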
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:39:08.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 11:39:08.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-rzh22" to be "success or failure"
Feb  8 11:39:08.848: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 118.406667ms
Feb  8 11:39:10.868: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138461241s
Feb  8 11:39:12.896: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165638208s
Feb  8 11:39:14.934: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204048285s
Feb  8 11:39:16.978: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.247530021s
Feb  8 11:39:18.994: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.264404976s
Feb  8 11:39:21.011: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.281438628s
STEP: Saw pod success
Feb  8 11:39:21.012: INFO: Pod "downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:39:21.018: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 11:39:21.131: INFO: Waiting for pod downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005 to disappear
Feb  8 11:39:21.143: INFO: Pod downwardapi-volume-9e5ab07e-4a67-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:39:21.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rzh22" for this suite.
Feb  8 11:39:27.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:39:27.517: INFO: namespace: e2e-tests-downward-api-rzh22, resource: bindings, ignored listing per whitelist
Feb  8 11:39:27.679: INFO: namespace e2e-tests-downward-api-rzh22 deletion completed in 6.458517696s

• [SLOW TEST:19.264 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
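"Should set mode on item file" covers the per-item `mode` field of a downwardAPI volume, which overrides the volume's default file permissions. A minimal sketch, with hypothetical names and an arbitrary label:

```yaml
# Illustrative sketch: a downwardAPI volume item with an explicit
# octal file mode, the knob this conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo   # hypothetical name
  labels:
    zone: us-east
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c %a /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400   # applied to the projected file
```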
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:39:27.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 11:39:27.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vstm2'
Feb  8 11:39:30.204: INFO: stderr: ""
Feb  8 11:39:30.204: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  8 11:39:40.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vstm2 -o json'
Feb  8 11:39:40.433: INFO: stderr: ""
Feb  8 11:39:40.433: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-08T11:39:30Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-vstm2\",\n        \"resourceVersion\": \"20970228\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-vstm2/pods/e2e-test-nginx-pod\",\n        \"uid\": \"ab25be62-4a67-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ksbc6\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ksbc6\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ksbc6\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T11:39:30Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T11:39:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T11:39:38Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T11:39:30Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://70c06049f62c3cd0fce8e92c945bcc905f4917bd225694724cc9e910ebec0ab1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-08T11:39:37Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-08T11:39:30Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  8 11:39:40.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-vstm2'
Feb  8 11:39:40.820: INFO: stderr: ""
Feb  8 11:39:40.820: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb  8 11:39:40.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vstm2'
Feb  8 11:39:49.703: INFO: stderr: ""
Feb  8 11:39:49.703: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:39:49.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vstm2" for this suite.
Feb  8 11:39:55.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:39:55.857: INFO: namespace: e2e-tests-kubectl-vstm2, resource: bindings, ignored listing per whitelist
Feb  8 11:39:56.096: INFO: namespace e2e-tests-kubectl-vstm2 deletion completed in 6.31397068s

• [SLOW TEST:28.417 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:39:56.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  8 11:39:56.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:39:56.563: INFO: stderr: ""
Feb  8 11:39:56.563: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 11:39:56.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:39:56.779: INFO: stderr: ""
Feb  8 11:39:56.779: INFO: stdout: "update-demo-nautilus-cr2sp update-demo-nautilus-djr5z "
Feb  8 11:39:56.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cr2sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:39:57.032: INFO: stderr: ""
Feb  8 11:39:57.032: INFO: stdout: ""
Feb  8 11:39:57.033: INFO: update-demo-nautilus-cr2sp is created but not running
Feb  8 11:40:02.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:02.215: INFO: stderr: ""
Feb  8 11:40:02.215: INFO: stdout: "update-demo-nautilus-cr2sp update-demo-nautilus-djr5z "
Feb  8 11:40:02.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cr2sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:02.383: INFO: stderr: ""
Feb  8 11:40:02.383: INFO: stdout: ""
Feb  8 11:40:02.383: INFO: update-demo-nautilus-cr2sp is created but not running
Feb  8 11:40:07.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:07.637: INFO: stderr: ""
Feb  8 11:40:07.637: INFO: stdout: "update-demo-nautilus-cr2sp update-demo-nautilus-djr5z "
Feb  8 11:40:07.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cr2sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:07.786: INFO: stderr: ""
Feb  8 11:40:07.786: INFO: stdout: ""
Feb  8 11:40:07.786: INFO: update-demo-nautilus-cr2sp is created but not running
Feb  8 11:40:12.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:12.998: INFO: stderr: ""
Feb  8 11:40:12.998: INFO: stdout: "update-demo-nautilus-cr2sp update-demo-nautilus-djr5z "
Feb  8 11:40:12.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cr2sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:13.139: INFO: stderr: ""
Feb  8 11:40:13.139: INFO: stdout: "true"
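[editor's note] The `--template` expression retried above prints `true` only once the `update-demo` container reports a `running` state; while the pod is Pending, `containerStatuses` is absent and the template renders an empty string, which is why stdout stays "" until 11:40:13. `exists` is a kubectl template helper, not part of Go's standard `text/template`; the sketch below supplies a minimal stand-in for it (the helper, function names, and sample data here are illustrative, not kubectl's actual code) so the template's logic can be run standalone:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The template the test passes to `kubectl get pods -o template` above.
const runningTmpl = `{{if (exists . "status" "containerStatuses")}}` +
	`{{range .status.containerStatuses}}` +
	`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` +
	`{{end}}{{end}}`

// exists approximates kubectl's template helper (illustrative only):
// it walks nested string-keyed maps and reports whether every key exists.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

// renderRunning executes the template against a decoded pod object.
func renderRunning(pod map[string]interface{}) (string, error) {
	t, err := template.New("pod").Funcs(template.FuncMap{"exists": exists}).Parse(runningTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, pod); err != nil {
		return "", err
	}
	return buf.String(), nil
}

// samplePod is a trimmed-down pod shaped like the JSON kubectl returns
// once the container has started.
func samplePod() map[string]interface{} {
	return map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
}

func main() {
	out, err := renderRunning(samplePod())
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // renders "true" once status.containerStatuses[].state.running exists
}
```

Deleting `containerStatuses` from the sample map makes `renderRunning` return the empty string, matching the empty stdout logged at 11:39:57, 11:40:02, and 11:40:07.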
Feb  8 11:40:13.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cr2sp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:13.265: INFO: stderr: ""
Feb  8 11:40:13.265: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:40:13.265: INFO: validating pod update-demo-nautilus-cr2sp
Feb  8 11:40:13.278: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 11:40:13.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:40:13.278: INFO: update-demo-nautilus-cr2sp is verified up and running
Feb  8 11:40:13.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djr5z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:13.391: INFO: stderr: ""
Feb  8 11:40:13.391: INFO: stdout: "true"
Feb  8 11:40:13.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djr5z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:13.496: INFO: stderr: ""
Feb  8 11:40:13.496: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 11:40:13.496: INFO: validating pod update-demo-nautilus-djr5z
Feb  8 11:40:13.505: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 11:40:13.505: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 11:40:13.505: INFO: update-demo-nautilus-djr5z is verified up and running
STEP: using delete to clean up resources
Feb  8 11:40:13.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:13.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:40:13.631: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  8 11:40:13.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-q25p9'
Feb  8 11:40:13.902: INFO: stderr: "No resources found.\n"
Feb  8 11:40:13.902: INFO: stdout: ""
Feb  8 11:40:13.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-q25p9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 11:40:14.166: INFO: stderr: ""
Feb  8 11:40:14.166: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:40:14.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q25p9" for this suite.
Feb  8 11:40:38.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:40:38.428: INFO: namespace: e2e-tests-kubectl-q25p9, resource: bindings, ignored listing per whitelist
Feb  8 11:40:38.629: INFO: namespace e2e-tests-kubectl-q25p9 deletion completed in 24.437686548s

• [SLOW TEST:42.533 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:40:38.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  8 11:40:38.890: INFO: Waiting up to 5m0s for pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-jdhkt" to be "success or failure"
Feb  8 11:40:38.903: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.254657ms
Feb  8 11:40:40.946: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05530845s
Feb  8 11:40:42.963: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073038384s
Feb  8 11:40:45.104: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213258321s
Feb  8 11:40:47.116: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22549262s
Feb  8 11:40:49.136: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.245805841s
Feb  8 11:40:51.150: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.259501359s
STEP: Saw pod success
Feb  8 11:40:51.150: INFO: Pod "pod-d40dda1f-4a67-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:40:51.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d40dda1f-4a67-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 11:40:51.297: INFO: Waiting for pod pod-d40dda1f-4a67-11ea-95d6-0242ac110005 to disappear
Feb  8 11:40:51.316: INFO: Pod pod-d40dda1f-4a67-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:40:51.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jdhkt" for this suite.
Feb  8 11:40:57.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:40:57.450: INFO: namespace: e2e-tests-emptydir-jdhkt, resource: bindings, ignored listing per whitelist
Feb  8 11:40:57.534: INFO: namespace e2e-tests-emptydir-jdhkt deletion completed in 6.207288253s

• [SLOW TEST:18.905 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:40:57.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 11:40:57.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6gkvn'
Feb  8 11:40:57.951: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 11:40:57.952: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb  8 11:41:00.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6gkvn'
Feb  8 11:41:00.594: INFO: stderr: ""
Feb  8 11:41:00.594: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:41:00.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6gkvn" for this suite.
Feb  8 11:41:06.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:41:06.886: INFO: namespace: e2e-tests-kubectl-6gkvn, resource: bindings, ignored listing per whitelist
Feb  8 11:41:06.968: INFO: namespace e2e-tests-kubectl-6gkvn deletion completed in 6.245773236s

• [SLOW TEST:9.433 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:41:06.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e4fcad81-4a67-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 11:41:07.239: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-m5djv" to be "success or failure"
Feb  8 11:41:07.381: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 142.533665ms
Feb  8 11:41:09.400: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16095261s
Feb  8 11:41:11.420: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181008517s
Feb  8 11:41:13.484: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245136844s
Feb  8 11:41:15.493: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254118973s
Feb  8 11:41:17.507: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.267970082s
STEP: Saw pod success
Feb  8 11:41:17.507: INFO: Pod "pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:41:17.512: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 11:41:18.225: INFO: Waiting for pod pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005 to disappear
Feb  8 11:41:18.251: INFO: Pod pod-projected-configmaps-e4fde40a-4a67-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:41:18.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m5djv" for this suite.
Feb  8 11:41:24.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:41:24.603: INFO: namespace: e2e-tests-projected-m5djv, resource: bindings, ignored listing per whitelist
Feb  8 11:41:24.622: INFO: namespace e2e-tests-projected-m5djv deletion completed in 6.364736242s

• [SLOW TEST:17.654 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:41:24.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  8 11:41:24.770: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  8 11:41:24.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:25.645: INFO: stderr: ""
Feb  8 11:41:25.645: INFO: stdout: "service/redis-slave created\n"
Feb  8 11:41:25.646: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  8 11:41:25.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:26.138: INFO: stderr: ""
Feb  8 11:41:26.138: INFO: stdout: "service/redis-master created\n"
Feb  8 11:41:26.139: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  8 11:41:26.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:26.762: INFO: stderr: ""
Feb  8 11:41:26.762: INFO: stdout: "service/frontend created\n"
Feb  8 11:41:26.763: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  8 11:41:26.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:27.211: INFO: stderr: ""
Feb  8 11:41:27.211: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  8 11:41:27.212: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  8 11:41:27.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:27.816: INFO: stderr: ""
Feb  8 11:41:27.816: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  8 11:41:27.818: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  8 11:41:27.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:28.251: INFO: stderr: ""
Feb  8 11:41:28.251: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  8 11:41:28.251: INFO: Waiting for all frontend pods to be Running.
Feb  8 11:41:58.303: INFO: Waiting for frontend to serve content.
Feb  8 11:41:58.417: INFO: Trying to add a new entry to the guestbook.
Feb  8 11:41:58.485: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  8 11:41:58.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:58.979: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:41:58.979: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 11:41:58.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:59.368: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:41:59.368: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 11:41:59.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:59.649: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:41:59.649: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 11:41:59.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:41:59.775: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:41:59.776: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 11:41:59.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:42:00.281: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:42:00.281: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 11:42:00.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4c8g'
Feb  8 11:42:00.579: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 11:42:00.579: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:42:00.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v4c8g" for this suite.
Feb  8 11:42:44.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:42:44.832: INFO: namespace: e2e-tests-kubectl-v4c8g, resource: bindings, ignored listing per whitelist
Feb  8 11:42:44.896: INFO: namespace e2e-tests-kubectl-v4c8g deletion completed in 44.21198084s

• [SLOW TEST:80.274 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:42:44.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jp65h
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 11:42:45.136: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 11:43:15.560: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-jp65h PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 11:43:15.560: INFO: >>> kubeConfig: /root/.kube/config
I0208 11:43:15.630096       8 log.go:172] (0xc000aa44d0) (0xc001b77900) Create stream
I0208 11:43:15.630164       8 log.go:172] (0xc000aa44d0) (0xc001b77900) Stream added, broadcasting: 1
I0208 11:43:15.637181       8 log.go:172] (0xc000aa44d0) Reply frame received for 1
I0208 11:43:15.637224       8 log.go:172] (0xc000aa44d0) (0xc001e8c320) Create stream
I0208 11:43:15.637240       8 log.go:172] (0xc000aa44d0) (0xc001e8c320) Stream added, broadcasting: 3
I0208 11:43:15.638671       8 log.go:172] (0xc000aa44d0) Reply frame received for 3
I0208 11:43:15.638714       8 log.go:172] (0xc000aa44d0) (0xc001796c80) Create stream
I0208 11:43:15.638730       8 log.go:172] (0xc000aa44d0) (0xc001796c80) Stream added, broadcasting: 5
I0208 11:43:15.640483       8 log.go:172] (0xc000aa44d0) Reply frame received for 5
I0208 11:43:15.786064       8 log.go:172] (0xc000aa44d0) Data frame received for 3
I0208 11:43:15.786275       8 log.go:172] (0xc001e8c320) (3) Data frame handling
I0208 11:43:15.786310       8 log.go:172] (0xc001e8c320) (3) Data frame sent
I0208 11:43:15.948759       8 log.go:172] (0xc000aa44d0) Data frame received for 1
I0208 11:43:15.948838       8 log.go:172] (0xc001b77900) (1) Data frame handling
I0208 11:43:15.948874       8 log.go:172] (0xc001b77900) (1) Data frame sent
I0208 11:43:15.982751       8 log.go:172] (0xc000aa44d0) (0xc001b77900) Stream removed, broadcasting: 1
I0208 11:43:15.983397       8 log.go:172] (0xc000aa44d0) (0xc001e8c320) Stream removed, broadcasting: 3
I0208 11:43:15.983463       8 log.go:172] (0xc000aa44d0) (0xc001796c80) Stream removed, broadcasting: 5
I0208 11:43:15.983494       8 log.go:172] (0xc000aa44d0) Go away received
I0208 11:43:15.984279       8 log.go:172] (0xc000aa44d0) (0xc001b77900) Stream removed, broadcasting: 1
I0208 11:43:15.984357       8 log.go:172] (0xc000aa44d0) (0xc001e8c320) Stream removed, broadcasting: 3
I0208 11:43:15.984394       8 log.go:172] (0xc000aa44d0) (0xc001796c80) Stream removed, broadcasting: 5
Feb  8 11:43:15.984: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:43:15.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jp65h" for this suite.
Feb  8 11:43:40.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:43:40.379: INFO: namespace: e2e-tests-pod-network-test-jp65h, resource: bindings, ignored listing per whitelist
Feb  8 11:43:40.388: INFO: namespace e2e-tests-pod-network-test-jp65h deletion completed in 24.371484218s

• [SLOW TEST:55.492 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
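The curl command logged in the test above probes the test container's `/dial` endpoint, asking one pod to dial another over UDP and report which hostnames answered. A hedged sketch of how such a probe URL is assembled and its JSON reply read (the endpoint shape and query parameters are taken from the logged command; the helper names are illustrative, not the framework's):

```python
import json
from urllib.parse import urlencode

def dial_url(dialer, target, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL the networking test issues via curl.

    dialer: host:port of the pod doing the dialing (e.g. "10.32.0.5:8080").
    target: IP of the pod being dialed.
    """
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target,
        "port": port,
        "tries": tries,
    })
    return "http://%s/dial?%s" % (dialer, query)

def hostnames_from_reply(body):
    """Parse the JSON body returned by /dial into the set of hostnames seen."""
    return set(json.loads(body).get("responses", []))
```

Once every expected pod hostname has been observed, the test's remaining-endpoints map is empty, which is what the log's `Waiting for endpoints: map[]` line reflects.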
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:43:40.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-l6lzk
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  8 11:43:40.818: INFO: Found 0 stateful pods, waiting for 3
Feb  8 11:43:51.830: INFO: Found 2 stateful pods, waiting for 3
Feb  8 11:44:01.196: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 11:44:01.196: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 11:44:01.196: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 11:44:10.926: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 11:44:10.927: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 11:44:10.927: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 11:44:10.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l6lzk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 11:44:11.530: INFO: stderr: "I0208 11:44:11.119247    2539 log.go:172] (0xc0006fc370) (0xc000661540) Create stream\nI0208 11:44:11.119515    2539 log.go:172] (0xc0006fc370) (0xc000661540) Stream added, broadcasting: 1\nI0208 11:44:11.126106    2539 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0208 11:44:11.126136    2539 log.go:172] (0xc0006fc370) (0xc0006615e0) Create stream\nI0208 11:44:11.126144    2539 log.go:172] (0xc0006fc370) (0xc0006615e0) Stream added, broadcasting: 3\nI0208 11:44:11.128789    2539 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0208 11:44:11.128831    2539 log.go:172] (0xc0006fc370) (0xc0004ce5a0) Create stream\nI0208 11:44:11.128850    2539 log.go:172] (0xc0006fc370) (0xc0004ce5a0) Stream added, broadcasting: 5\nI0208 11:44:11.129880    2539 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0208 11:44:11.351514    2539 log.go:172] (0xc0006fc370) Data frame received for 3\nI0208 11:44:11.351668    2539 log.go:172] (0xc0006615e0) (3) Data frame handling\nI0208 11:44:11.351727    2539 log.go:172] (0xc0006615e0) (3) Data frame sent\nI0208 11:44:11.514813    2539 log.go:172] (0xc0006fc370) (0xc0006615e0) Stream removed, broadcasting: 3\nI0208 11:44:11.515372    2539 log.go:172] (0xc0006fc370) Data frame received for 1\nI0208 11:44:11.515728    2539 log.go:172] (0xc0006fc370) (0xc0004ce5a0) Stream removed, broadcasting: 5\nI0208 11:44:11.515862    2539 log.go:172] (0xc000661540) (1) Data frame handling\nI0208 11:44:11.515919    2539 log.go:172] (0xc000661540) (1) Data frame sent\nI0208 11:44:11.516044    2539 log.go:172] (0xc0006fc370) (0xc000661540) Stream removed, broadcasting: 1\nI0208 11:44:11.516119    2539 log.go:172] (0xc0006fc370) Go away received\nI0208 11:44:11.516926    2539 log.go:172] (0xc0006fc370) (0xc000661540) Stream removed, broadcasting: 1\nI0208 11:44:11.517044    2539 log.go:172] (0xc0006fc370) (0xc0006615e0) Stream removed, broadcasting: 3\nI0208 11:44:11.517132    2539 log.go:172] (0xc0006fc370) (0xc0004ce5a0) Stream removed, broadcasting: 5\n"
Feb  8 11:44:11.530: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 11:44:11.530: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  8 11:44:21.604: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  8 11:44:31.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l6lzk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 11:44:32.371: INFO: stderr: "I0208 11:44:31.910725    2561 log.go:172] (0xc000720370) (0xc0005d3360) Create stream\nI0208 11:44:31.911082    2561 log.go:172] (0xc000720370) (0xc0005d3360) Stream added, broadcasting: 1\nI0208 11:44:31.916810    2561 log.go:172] (0xc000720370) Reply frame received for 1\nI0208 11:44:31.916858    2561 log.go:172] (0xc000720370) (0xc000380000) Create stream\nI0208 11:44:31.916868    2561 log.go:172] (0xc000720370) (0xc000380000) Stream added, broadcasting: 3\nI0208 11:44:31.917995    2561 log.go:172] (0xc000720370) Reply frame received for 3\nI0208 11:44:31.918015    2561 log.go:172] (0xc000720370) (0xc0005d3400) Create stream\nI0208 11:44:31.918019    2561 log.go:172] (0xc000720370) (0xc0005d3400) Stream added, broadcasting: 5\nI0208 11:44:31.918713    2561 log.go:172] (0xc000720370) Reply frame received for 5\nI0208 11:44:32.178271    2561 log.go:172] (0xc000720370) Data frame received for 3\nI0208 11:44:32.178488    2561 log.go:172] (0xc000380000) (3) Data frame handling\nI0208 11:44:32.178575    2561 log.go:172] (0xc000380000) (3) Data frame sent\nI0208 11:44:32.351179    2561 log.go:172] (0xc000720370) (0xc000380000) Stream removed, broadcasting: 3\nI0208 11:44:32.351693    2561 log.go:172] (0xc000720370) Data frame received for 1\nI0208 11:44:32.351969    2561 log.go:172] (0xc000720370) (0xc0005d3400) Stream removed, broadcasting: 5\nI0208 11:44:32.352126    2561 log.go:172] (0xc0005d3360) (1) Data frame handling\nI0208 11:44:32.352177    2561 log.go:172] (0xc0005d3360) (1) Data frame sent\nI0208 11:44:32.352193    2561 log.go:172] (0xc000720370) (0xc0005d3360) Stream removed, broadcasting: 1\nI0208 11:44:32.352231    2561 log.go:172] (0xc000720370) Go away received\nI0208 11:44:32.353382    2561 log.go:172] (0xc000720370) (0xc0005d3360) Stream removed, broadcasting: 1\nI0208 11:44:32.353445    2561 log.go:172] (0xc000720370) (0xc000380000) Stream removed, broadcasting: 3\nI0208 11:44:32.353461    2561 log.go:172] (0xc000720370) (0xc0005d3400) Stream removed, broadcasting: 5\n"
Feb  8 11:44:32.371: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 11:44:32.371: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 11:44:42.478: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:44:42.478: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 11:44:42.478: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 11:44:52.560: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:44:52.561: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 11:44:52.561: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 11:45:02.584: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:45:02.584: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 11:45:12.727: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  8 11:45:22.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l6lzk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 11:45:23.299: INFO: stderr: "I0208 11:45:22.736677    2584 log.go:172] (0xc00070a370) (0xc0007b6640) Create stream\nI0208 11:45:22.736980    2584 log.go:172] (0xc00070a370) (0xc0007b6640) Stream added, broadcasting: 1\nI0208 11:45:22.747579    2584 log.go:172] (0xc00070a370) Reply frame received for 1\nI0208 11:45:22.747632    2584 log.go:172] (0xc00070a370) (0xc00064cc80) Create stream\nI0208 11:45:22.747641    2584 log.go:172] (0xc00070a370) (0xc00064cc80) Stream added, broadcasting: 3\nI0208 11:45:22.748776    2584 log.go:172] (0xc00070a370) Reply frame received for 3\nI0208 11:45:22.748802    2584 log.go:172] (0xc00070a370) (0xc000394000) Create stream\nI0208 11:45:22.748813    2584 log.go:172] (0xc00070a370) (0xc000394000) Stream added, broadcasting: 5\nI0208 11:45:22.750111    2584 log.go:172] (0xc00070a370) Reply frame received for 5\nI0208 11:45:23.111600    2584 log.go:172] (0xc00070a370) Data frame received for 3\nI0208 11:45:23.111649    2584 log.go:172] (0xc00064cc80) (3) Data frame handling\nI0208 11:45:23.111669    2584 log.go:172] (0xc00064cc80) (3) Data frame sent\nI0208 11:45:23.283793    2584 log.go:172] (0xc00070a370) (0xc00064cc80) Stream removed, broadcasting: 3\nI0208 11:45:23.284398    2584 log.go:172] (0xc00070a370) Data frame received for 1\nI0208 11:45:23.284798    2584 log.go:172] (0xc00070a370) (0xc000394000) Stream removed, broadcasting: 5\nI0208 11:45:23.284999    2584 log.go:172] (0xc0007b6640) (1) Data frame handling\nI0208 11:45:23.285049    2584 log.go:172] (0xc0007b6640) (1) Data frame sent\nI0208 11:45:23.285067    2584 log.go:172] (0xc00070a370) (0xc0007b6640) Stream removed, broadcasting: 1\nI0208 11:45:23.285098    2584 log.go:172] (0xc00070a370) Go away received\nI0208 11:45:23.286179    2584 log.go:172] (0xc00070a370) (0xc0007b6640) Stream removed, broadcasting: 1\nI0208 11:45:23.286233    2584 log.go:172] (0xc00070a370) (0xc00064cc80) Stream removed, broadcasting: 3\nI0208 11:45:23.286245    2584 log.go:172] (0xc00070a370) (0xc000394000) Stream removed, broadcasting: 5\n"
Feb  8 11:45:23.299: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 11:45:23.299: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 11:45:33.386: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  8 11:45:43.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l6lzk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 11:45:44.269: INFO: stderr: "I0208 11:45:43.767258    2606 log.go:172] (0xc000704370) (0xc0007c4640) Create stream\nI0208 11:45:43.767866    2606 log.go:172] (0xc000704370) (0xc0007c4640) Stream added, broadcasting: 1\nI0208 11:45:43.777526    2606 log.go:172] (0xc000704370) Reply frame received for 1\nI0208 11:45:43.777580    2606 log.go:172] (0xc000704370) (0xc0007c46e0) Create stream\nI0208 11:45:43.777590    2606 log.go:172] (0xc000704370) (0xc0007c46e0) Stream added, broadcasting: 3\nI0208 11:45:43.779564    2606 log.go:172] (0xc000704370) Reply frame received for 3\nI0208 11:45:43.779595    2606 log.go:172] (0xc000704370) (0xc0005cef00) Create stream\nI0208 11:45:43.779606    2606 log.go:172] (0xc000704370) (0xc0005cef00) Stream added, broadcasting: 5\nI0208 11:45:43.780987    2606 log.go:172] (0xc000704370) Reply frame received for 5\nI0208 11:45:44.005274    2606 log.go:172] (0xc000704370) Data frame received for 3\nI0208 11:45:44.005738    2606 log.go:172] (0xc0007c46e0) (3) Data frame handling\nI0208 11:45:44.005926    2606 log.go:172] (0xc0007c46e0) (3) Data frame sent\nI0208 11:45:44.250386    2606 log.go:172] (0xc000704370) Data frame received for 1\nI0208 11:45:44.250708    2606 log.go:172] (0xc000704370) (0xc0007c46e0) Stream removed, broadcasting: 3\nI0208 11:45:44.250827    2606 log.go:172] (0xc0007c4640) (1) Data frame handling\nI0208 11:45:44.250901    2606 log.go:172] (0xc0007c4640) (1) Data frame sent\nI0208 11:45:44.250927    2606 log.go:172] (0xc000704370) (0xc0005cef00) Stream removed, broadcasting: 5\nI0208 11:45:44.251066    2606 log.go:172] (0xc000704370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0208 11:45:44.251179    2606 log.go:172] (0xc000704370) Go away received\nI0208 11:45:44.251868    2606 log.go:172] (0xc000704370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0208 11:45:44.251894    2606 log.go:172] (0xc000704370) (0xc0007c46e0) Stream removed, broadcasting: 3\nI0208 11:45:44.251910    2606 log.go:172] (0xc000704370) (0xc0005cef00) Stream removed, broadcasting: 5\n"
Feb  8 11:45:44.270: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 11:45:44.270: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 11:45:44.960: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:45:44.960: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:45:44.960: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:45:44.960: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:45:54.994: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:45:54.994: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:45:54.994: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:46:05.187: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:46:05.187: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:46:05.187: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:46:15.227: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:46:15.227: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:46:25.908: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
Feb  8 11:46:25.908: INFO: Waiting for Pod e2e-tests-statefulset-l6lzk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 11:46:35.378: INFO: Waiting for StatefulSet e2e-tests-statefulset-l6lzk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  8 11:46:44.982: INFO: Deleting all statefulset in ns e2e-tests-statefulset-l6lzk
Feb  8 11:46:44.989: INFO: Scaling statefulset ss2 to 0
Feb  8 11:47:25.035: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 11:47:25.049: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:47:25.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-l6lzk" for this suite.
Feb  8 11:47:33.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:47:33.325: INFO: namespace: e2e-tests-statefulset-l6lzk, resource: bindings, ignored listing per whitelist
Feb  8 11:47:33.659: INFO: namespace e2e-tests-statefulset-l6lzk deletion completed in 8.525257574s

• [SLOW TEST:233.271 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
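The StatefulSet test above updates pods "in reverse ordinal order": ss2-2 is replaced first, then ss2-1, then ss2-0, and the rollback proceeds the same way. A hedged sketch of that ordering rule (an illustrative helper, not the controller's actual code):

```python
def rolling_update_order(name, replicas):
    """StatefulSet rolling updates proceed from the highest ordinal down,
    so pod N-1 is replaced first and pod 0 last. Returns the pod names in
    the order they would be updated."""
    return ["%s-%d" % (name, i) for i in range(replicas - 1, -1, -1)]
```

For the three-replica set in the log, `rolling_update_order("ss2", 3)` yields `["ss2-2", "ss2-1", "ss2-0"]`, which matches why the "Waiting for Pod ... to have revision" lines drain from ss2-2 upward to ss2-0.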
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:47:33.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-cb806a5d-4a68-11ea-95d6-0242ac110005
STEP: Creating secret with name s-test-opt-upd-cb806c57-4a68-11ea-95d6-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cb806a5d-4a68-11ea-95d6-0242ac110005
STEP: Updating secret s-test-opt-upd-cb806c57-4a68-11ea-95d6-0242ac110005
STEP: Creating secret with name s-test-opt-create-cb806c93-4a68-11ea-95d6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:49:07.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cvzb5" for this suite.
Feb  8 11:49:31.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:49:31.262: INFO: namespace: e2e-tests-projected-cvzb5, resource: bindings, ignored listing per whitelist
Feb  8 11:49:31.313: INFO: namespace e2e-tests-projected-cvzb5 deletion completed in 24.201425584s

• [SLOW TEST:117.652 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:49:31.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb  8 11:49:32.081: INFO: created pod pod-service-account-defaultsa
Feb  8 11:49:32.081: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  8 11:49:32.104: INFO: created pod pod-service-account-mountsa
Feb  8 11:49:32.104: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  8 11:49:32.149: INFO: created pod pod-service-account-nomountsa
Feb  8 11:49:32.149: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  8 11:49:32.341: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  8 11:49:32.341: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  8 11:49:32.397: INFO: created pod pod-service-account-mountsa-mountspec
Feb  8 11:49:32.397: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  8 11:49:32.728: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  8 11:49:32.728: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  8 11:49:32.828: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  8 11:49:32.828: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  8 11:49:32.846: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  8 11:49:32.846: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  8 11:49:32.888: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  8 11:49:32.888: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
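The ten pods above exercise the combinations of a service account's automount setting and the pod spec's `automountServiceAccountToken` field. The effective rule, which the logged mount results follow, is that the pod spec wins when set, otherwise the service account's setting applies, and otherwise the token is mounted. A hedged sketch of that decision (illustrative helper, not the kubelet's code):

```python
def token_volume_mounted(pod_spec_setting, sa_setting):
    """Decide whether the API token volume is mounted into a pod.

    pod_spec_setting: the pod's automountServiceAccountToken (True/False/None).
    sa_setting: the service account's automountServiceAccountToken (True/False/None).
    Pod spec overrides service account, which overrides the default of mounting.
    """
    if pod_spec_setting is not None:
        return pod_spec_setting
    if sa_setting is not None:
        return sa_setting
    return True
```

For example, `pod-service-account-nomountsa-mountspec` (service account opts out, pod spec opts in) still gets the mount, matching the `volume mount: true` line above.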
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:49:32.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-k97bh" for this suite.
Feb  8 11:50:03.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:50:04.016: INFO: namespace: e2e-tests-svcaccounts-k97bh, resource: bindings, ignored listing per whitelist
Feb  8 11:50:04.134: INFO: namespace e2e-tests-svcaccounts-k97bh deletion completed in 31.225446707s

• [SLOW TEST:32.821 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:50:04.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 11:50:04.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-mdl4w" to be "success or failure"
Feb  8 11:50:04.501: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.986998ms
Feb  8 11:50:06.541: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116807184s
Feb  8 11:50:08.573: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149338198s
Feb  8 11:50:10.956: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532265187s
Feb  8 11:50:12.971: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547584961s
Feb  8 11:50:14.981: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.557603431s
STEP: Saw pod success
Feb  8 11:50:14.981: INFO: Pod "downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:50:14.985: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 11:50:15.047: INFO: Waiting for pod downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005 to disappear
Feb  8 11:50:15.059: INFO: Pod downwardapi-volume-25297096-4a69-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:50:15.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mdl4w" for this suite.
Feb  8 11:50:21.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:50:21.282: INFO: namespace: e2e-tests-projected-mdl4w, resource: bindings, ignored listing per whitelist
Feb  8 11:50:21.486: INFO: namespace e2e-tests-projected-mdl4w deletion completed in 6.346413843s

• [SLOW TEST:17.352 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:50:21.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 11:50:21.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-cw62k'
Feb  8 11:50:23.889: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 11:50:23.890: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  8 11:50:25.974: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4vp5z]
Feb  8 11:50:25.974: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4vp5z" in namespace "e2e-tests-kubectl-cw62k" to be "running and ready"
Feb  8 11:50:25.982: INFO: Pod "e2e-test-nginx-rc-4vp5z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096245ms
Feb  8 11:50:28.007: INFO: Pod "e2e-test-nginx-rc-4vp5z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032531538s
Feb  8 11:50:30.750: INFO: Pod "e2e-test-nginx-rc-4vp5z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.775979363s
Feb  8 11:50:32.764: INFO: Pod "e2e-test-nginx-rc-4vp5z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.790322772s
Feb  8 11:50:34.784: INFO: Pod "e2e-test-nginx-rc-4vp5z": Phase="Running", Reason="", readiness=true. Elapsed: 8.81009455s
Feb  8 11:50:34.784: INFO: Pod "e2e-test-nginx-rc-4vp5z" satisfied condition "running and ready"
Feb  8 11:50:34.784: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4vp5z]
Feb  8 11:50:34.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cw62k'
Feb  8 11:50:35.063: INFO: stderr: ""
Feb  8 11:50:35.063: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  8 11:50:35.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cw62k'
Feb  8 11:50:35.236: INFO: stderr: ""
Feb  8 11:50:35.236: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:50:35.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cw62k" for this suite.
Feb  8 11:50:57.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:50:57.387: INFO: namespace: e2e-tests-kubectl-cw62k, resource: bindings, ignored listing per whitelist
Feb  8 11:50:57.467: INFO: namespace e2e-tests-kubectl-cw62k deletion completed in 22.218702852s

• [SLOW TEST:35.981 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:50:57.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-44ec4d44-4a69-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 11:50:57.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-dm24s" to be "success or failure"
Feb  8 11:50:57.701: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.813373ms
Feb  8 11:50:59.770: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090950083s
Feb  8 11:51:01.808: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128187607s
Feb  8 11:51:03.842: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16259568s
Feb  8 11:51:05.864: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184298366s
Feb  8 11:51:07.895: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.215798136s
STEP: Saw pod success
Feb  8 11:51:07.895: INFO: Pod "pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:51:07.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  8 11:51:07.998: INFO: Waiting for pod pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005 to disappear
Feb  8 11:51:08.180: INFO: Pod pod-configmaps-44ed39d1-4a69-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:51:08.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dm24s" for this suite.
Feb  8 11:51:14.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:51:14.283: INFO: namespace: e2e-tests-configmap-dm24s, resource: bindings, ignored listing per whitelist
Feb  8 11:51:14.329: INFO: namespace e2e-tests-configmap-dm24s deletion completed in 6.138618409s

• [SLOW TEST:16.862 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:51:14.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  8 11:51:14.599: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:51:31.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-67664" for this suite.
Feb  8 11:51:39.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:51:40.018: INFO: namespace: e2e-tests-init-container-67664, resource: bindings, ignored listing per whitelist
Feb  8 11:51:40.056: INFO: namespace e2e-tests-init-container-67664 deletion completed in 8.246020973s

• [SLOW TEST:25.727 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:51:40.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 11:51:40.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-chtkm" to be "success or failure"
Feb  8 11:51:40.359: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.176567ms
Feb  8 11:51:42.667: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326280153s
Feb  8 11:51:44.674: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333493889s
Feb  8 11:51:46.695: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354208146s
Feb  8 11:51:48.880: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53927198s
Feb  8 11:51:50.894: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.553046918s
STEP: Saw pod success
Feb  8 11:51:50.894: INFO: Pod "downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:51:50.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 11:51:50.981: INFO: Waiting for pod downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005 to disappear
Feb  8 11:51:51.041: INFO: Pod downwardapi-volume-5e50177b-4a69-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:51:51.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-chtkm" for this suite.
Feb  8 11:51:59.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:51:59.140: INFO: namespace: e2e-tests-projected-chtkm, resource: bindings, ignored listing per whitelist
Feb  8 11:51:59.446: INFO: namespace e2e-tests-projected-chtkm deletion completed in 8.396754172s

• [SLOW TEST:19.390 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:51:59.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-69f55b01-4a69-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 11:52:00.054: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-hlg68" to be "success or failure"
Feb  8 11:52:00.076: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.233383ms
Feb  8 11:52:02.096: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041902728s
Feb  8 11:52:04.117: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062876444s
Feb  8 11:52:06.446: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392604033s
Feb  8 11:52:08.469: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.414912709s
Feb  8 11:52:10.491: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.437624333s
STEP: Saw pod success
Feb  8 11:52:10.491: INFO: Pod "pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 11:52:10.500: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 11:52:11.357: INFO: Waiting for pod pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005 to disappear
Feb  8 11:52:11.386: INFO: Pod pod-projected-secrets-69f6da1a-4a69-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:52:11.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hlg68" for this suite.
Feb  8 11:52:17.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:52:17.689: INFO: namespace: e2e-tests-projected-hlg68, resource: bindings, ignored listing per whitelist
Feb  8 11:52:17.757: INFO: namespace e2e-tests-projected-hlg68 deletion completed in 6.360046168s

• [SLOW TEST:18.311 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:52:17.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  8 11:52:30.115: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-74d500fb-4a69-11ea-95d6-0242ac110005,GenerateName:,Namespace:e2e-tests-events-n2q6k,SelfLink:/api/v1/namespaces/e2e-tests-events-n2q6k/pods/send-events-74d500fb-4a69-11ea-95d6-0242ac110005,UID:74d77279-4a69-11ea-a994-fa163e34d433,ResourceVersion:20972184,Generation:0,CreationTimestamp:2020-02-08 11:52:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 35956063,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pnqgx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pnqgx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pnqgx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00087c520} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00087c550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:52:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:52:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:52:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 11:52:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-08 11:52:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-08 11:52:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://9ba8765dcb69cfb2bb92eeae29b1691fdaa8d017e54651d11b0700454673c1a7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  8 11:52:32.128: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  8 11:52:34.139: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:52:34.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-n2q6k" for this suite.
Feb  8 11:53:14.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:53:14.302: INFO: namespace: e2e-tests-events-n2q6k, resource: bindings, ignored listing per whitelist
Feb  8 11:53:14.332: INFO: namespace e2e-tests-events-n2q6k deletion completed in 40.163330168s

• [SLOW TEST:56.575 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:53:14.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 11:53:14.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:53:23.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d6xfz" for this suite.
Feb  8 11:54:17.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:54:17.615: INFO: namespace: e2e-tests-pods-d6xfz, resource: bindings, ignored listing per whitelist
Feb  8 11:54:17.636: INFO: namespace e2e-tests-pods-d6xfz deletion completed in 54.268963627s

• [SLOW TEST:63.304 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:54:17.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 11:54:17.952: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  8 11:54:17.995: INFO: Number of nodes with available pods: 0
Feb  8 11:54:17.995: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  8 11:54:18.062: INFO: Number of nodes with available pods: 0
Feb  8 11:54:18.062: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:19.088: INFO: Number of nodes with available pods: 0
Feb  8 11:54:19.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:20.081: INFO: Number of nodes with available pods: 0
Feb  8 11:54:20.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:21.077: INFO: Number of nodes with available pods: 0
Feb  8 11:54:21.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:22.085: INFO: Number of nodes with available pods: 0
Feb  8 11:54:22.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:23.586: INFO: Number of nodes with available pods: 0
Feb  8 11:54:23.587: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:24.076: INFO: Number of nodes with available pods: 0
Feb  8 11:54:24.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:25.097: INFO: Number of nodes with available pods: 0
Feb  8 11:54:25.097: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:26.075: INFO: Number of nodes with available pods: 0
Feb  8 11:54:26.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:27.070: INFO: Number of nodes with available pods: 1
Feb  8 11:54:27.070: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  8 11:54:27.117: INFO: Number of nodes with available pods: 1
Feb  8 11:54:27.117: INFO: Number of running nodes: 0, number of available pods: 1
Feb  8 11:54:28.133: INFO: Number of nodes with available pods: 0
Feb  8 11:54:28.133: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  8 11:54:28.168: INFO: Number of nodes with available pods: 0
Feb  8 11:54:28.168: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:29.184: INFO: Number of nodes with available pods: 0
Feb  8 11:54:29.184: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:30.384: INFO: Number of nodes with available pods: 0
Feb  8 11:54:30.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:31.178: INFO: Number of nodes with available pods: 0
Feb  8 11:54:31.178: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:32.177: INFO: Number of nodes with available pods: 0
Feb  8 11:54:32.177: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:33.179: INFO: Number of nodes with available pods: 0
Feb  8 11:54:33.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:34.179: INFO: Number of nodes with available pods: 0
Feb  8 11:54:34.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:35.196: INFO: Number of nodes with available pods: 0
Feb  8 11:54:35.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:36.182: INFO: Number of nodes with available pods: 0
Feb  8 11:54:36.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:37.233: INFO: Number of nodes with available pods: 0
Feb  8 11:54:37.233: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:38.184: INFO: Number of nodes with available pods: 0
Feb  8 11:54:38.184: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:39.185: INFO: Number of nodes with available pods: 0
Feb  8 11:54:39.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:40.181: INFO: Number of nodes with available pods: 0
Feb  8 11:54:40.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:41.186: INFO: Number of nodes with available pods: 0
Feb  8 11:54:41.186: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:42.189: INFO: Number of nodes with available pods: 0
Feb  8 11:54:42.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:43.179: INFO: Number of nodes with available pods: 0
Feb  8 11:54:43.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:44.183: INFO: Number of nodes with available pods: 0
Feb  8 11:54:44.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:45.205: INFO: Number of nodes with available pods: 0
Feb  8 11:54:45.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:46.198: INFO: Number of nodes with available pods: 0
Feb  8 11:54:46.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:47.185: INFO: Number of nodes with available pods: 0
Feb  8 11:54:47.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:48.242: INFO: Number of nodes with available pods: 0
Feb  8 11:54:48.242: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:49.406: INFO: Number of nodes with available pods: 0
Feb  8 11:54:49.406: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:50.185: INFO: Number of nodes with available pods: 0
Feb  8 11:54:50.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 11:54:51.182: INFO: Number of nodes with available pods: 1
Feb  8 11:54:51.182: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-299sn, will wait for the garbage collector to delete the pods
Feb  8 11:54:51.279: INFO: Deleting DaemonSet.extensions daemon-set took: 20.807345ms
Feb  8 11:54:51.380: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.878452ms
Feb  8 11:55:02.619: INFO: Number of nodes with available pods: 0
Feb  8 11:55:02.619: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 11:55:02.636: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-299sn/daemonsets","resourceVersion":"20972454"},"items":null}

Feb  8 11:55:02.643: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-299sn/pods","resourceVersion":"20972454"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:55:02.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-299sn" for this suite.
Feb  8 11:55:09.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:55:09.152: INFO: namespace: e2e-tests-daemonsets-299sn, resource: bindings, ignored listing per whitelist
Feb  8 11:55:09.226: INFO: namespace e2e-tests-daemonsets-299sn deletion completed in 6.357402747s

• [SLOW TEST:51.589 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:55:09.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0208 11:55:12.114157       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 11:55:12.114: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:55:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pcdph" for this suite.
Feb  8 11:55:21.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:55:21.375: INFO: namespace: e2e-tests-gc-pcdph, resource: bindings, ignored listing per whitelist
Feb  8 11:55:21.403: INFO: namespace e2e-tests-gc-pcdph deletion completed in 9.282050547s

• [SLOW TEST:12.176 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:55:21.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e238d295-4a69-11ea-95d6-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e238d295-4a69-11ea-95d6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:56:48.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z6pn8" for this suite.
Feb  8 11:57:12.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:57:12.680: INFO: namespace: e2e-tests-projected-z6pn8, resource: bindings, ignored listing per whitelist
Feb  8 11:57:12.708: INFO: namespace e2e-tests-projected-z6pn8 deletion completed in 24.33713504s

• [SLOW TEST:111.305 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:57:12.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 11:57:12.977: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.651764ms)
Feb  8 11:57:12.983: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.844998ms)
Feb  8 11:57:12.989: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.075571ms)
Feb  8 11:57:12.996: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.378621ms)
Feb  8 11:57:13.001: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.122ms)
Feb  8 11:57:13.006: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.391423ms)
Feb  8 11:57:13.011: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.994214ms)
Feb  8 11:57:13.016: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.041893ms)
Feb  8 11:57:13.022: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.833018ms)
Feb  8 11:57:13.027: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.815232ms)
Feb  8 11:57:13.031: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.057999ms)
Feb  8 11:57:13.035: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.146116ms)
Feb  8 11:57:13.041: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.004369ms)
Feb  8 11:57:13.114: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 72.458268ms)
Feb  8 11:57:13.120: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.04325ms)
Feb  8 11:57:13.127: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.792956ms)
Feb  8 11:57:13.137: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.750107ms)
Feb  8 11:57:13.144: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.027547ms)
Feb  8 11:57:13.150: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.168149ms)
Feb  8 11:57:13.156: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.6045ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:57:13.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-rgnj7" for this suite.
Feb  8 11:57:19.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:57:19.366: INFO: namespace: e2e-tests-proxy-rgnj7, resource: bindings, ignored listing per whitelist
Feb  8 11:57:19.415: INFO: namespace e2e-tests-proxy-rgnj7 deletion completed in 6.252290213s

• [SLOW TEST:6.706 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:57:19.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mzfrb
Feb  8 11:57:33.890: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mzfrb
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 11:57:33.897: INFO: Initial restart count of pod liveness-http is 0
Feb  8 11:57:50.204: INFO: Restart count of pod e2e-tests-container-probe-mzfrb/liveness-http is now 1 (16.3071372s elapsed)
Feb  8 11:58:08.385: INFO: Restart count of pod e2e-tests-container-probe-mzfrb/liveness-http is now 2 (34.487908525s elapsed)
Feb  8 11:58:28.690: INFO: Restart count of pod e2e-tests-container-probe-mzfrb/liveness-http is now 3 (54.792969707s elapsed)
Feb  8 11:58:51.946: INFO: Restart count of pod e2e-tests-container-probe-mzfrb/liveness-http is now 4 (1m18.048951813s elapsed)
Feb  8 11:59:50.598: INFO: Restart count of pod e2e-tests-container-probe-mzfrb/liveness-http is now 5 (2m16.70161175s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 11:59:50.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mzfrb" for this suite.
Feb  8 11:59:56.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 11:59:56.984: INFO: namespace: e2e-tests-container-probe-mzfrb, resource: bindings, ignored listing per whitelist
Feb  8 11:59:57.003: INFO: namespace e2e-tests-container-probe-mzfrb deletion completed in 6.270983958s

• [SLOW TEST:157.587 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 11:59:57.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-f7mdx/configmap-test-8681ba93-4a6a-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 11:59:57.207: INFO: Waiting up to 5m0s for pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-f7mdx" to be "success or failure"
Feb  8 11:59:57.220: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.950945ms
Feb  8 11:59:59.236: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029340963s
Feb  8 12:00:01.283: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07593935s
Feb  8 12:00:03.643: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436803406s
Feb  8 12:00:05.663: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456354778s
Feb  8 12:00:07.675: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468580919s
STEP: Saw pod success
Feb  8 12:00:07.675: INFO: Pod "pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:00:07.679: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005 container env-test: 
STEP: delete the pod
Feb  8 12:00:08.886: INFO: Waiting for pod pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005 to disappear
Feb  8 12:00:08.940: INFO: Pod pod-configmaps-86831fdd-4a6a-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:00:08.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f7mdx" for this suite.
Feb  8 12:00:15.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:00:15.125: INFO: namespace: e2e-tests-configmap-f7mdx, resource: bindings, ignored listing per whitelist
Feb  8 12:00:15.191: INFO: namespace e2e-tests-configmap-f7mdx deletion completed in 6.23765264s

• [SLOW TEST:18.189 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:00:15.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-8tbxl
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8tbxl to expose endpoints map[]
Feb  8 12:00:15.523: INFO: Get endpoints failed (19.405637ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  8 12:00:16.548: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8tbxl exposes endpoints map[] (1.043717237s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-8tbxl
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8tbxl to expose endpoints map[pod1:[100]]
Feb  8 12:00:21.059: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.471710792s elapsed, will retry)
Feb  8 12:00:25.538: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8tbxl exposes endpoints map[pod1:[100]] (8.950731572s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-8tbxl
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8tbxl to expose endpoints map[pod1:[100] pod2:[101]]
Feb  8 12:00:29.968: INFO: Unexpected endpoints: found map[921060c2-4a6a-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.418590695s elapsed, will retry)
Feb  8 12:00:33.393: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8tbxl exposes endpoints map[pod1:[100] pod2:[101]] (7.843976197s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-8tbxl
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8tbxl to expose endpoints map[pod2:[101]]
Feb  8 12:00:34.897: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8tbxl exposes endpoints map[pod2:[101]] (1.481359241s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-8tbxl
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-8tbxl to expose endpoints map[]
Feb  8 12:00:36.303: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-8tbxl exposes endpoints map[] (1.388054565s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:00:36.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-8tbxl" for this suite.
Feb  8 12:00:42.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:00:42.941: INFO: namespace: e2e-tests-services-8tbxl, resource: bindings, ignored listing per whitelist
Feb  8 12:00:42.975: INFO: namespace e2e-tests-services-8tbxl deletion completed in 6.393710251s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:27.784 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:00:42.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:00:56.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-czsfm" for this suite.
Feb  8 12:01:20.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:01:20.419: INFO: namespace: e2e-tests-replication-controller-czsfm, resource: bindings, ignored listing per whitelist
Feb  8 12:01:20.532: INFO: namespace e2e-tests-replication-controller-czsfm deletion completed in 24.211049428s

• [SLOW TEST:37.557 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:01:20.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  8 12:01:20.792: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  8 12:01:20.800: INFO: Waiting for terminating namespaces to be deleted...
Feb  8 12:01:20.802: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb  8 12:01:20.817: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  8 12:01:20.817: INFO: 	Container coredns ready: true, restart count 0
Feb  8 12:01:20.817: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  8 12:01:20.817: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 12:01:20.817: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  8 12:01:20.817: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  8 12:01:20.817: INFO: 	Container weave ready: true, restart count 0
Feb  8 12:01:20.817: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 12:01:20.817: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  8 12:01:20.817: INFO: 	Container coredns ready: true, restart count 0
Feb  8 12:01:20.817: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  8 12:01:20.817: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  8 12:01:20.817: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-be5f5d47-4a6a-11ea-95d6-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-be5f5d47-4a6a-11ea-95d6-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-be5f5d47-4a6a-11ea-95d6-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:01:43.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-njkkh" for this suite.
Feb  8 12:01:57.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:01:57.527: INFO: namespace: e2e-tests-sched-pred-njkkh, resource: bindings, ignored listing per whitelist
Feb  8 12:01:57.536: INFO: namespace e2e-tests-sched-pred-njkkh deletion completed in 14.231425399s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.003 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
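The STEPs in the scheduling spec above amount to: label the node (the log shows `kubernetes.io/e2e-be5f5d47-4a6a-11ea-95d6-0242ac110005=42` applied to `hunter-server-hu5at5svl7ps`), relaunch the pod with a matching `nodeSelector`, then remove the label. A minimal sketch of the relaunched pod — the label key/value are the run-generated ones from the log, while the pod name and image are illustrative placeholders:

```yaml
# Illustrative pod for the "NodeSelector is respected if matching" spec.
# The nodeSelector must match the label previously applied to the node,
# e.g. via: kubectl label node hunter-server-hu5at5svl7ps <key>=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-be5f5d47-4a6a-11ea-95d6-0242ac110005: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
```

Once the label is removed (the final STEP), a pod with this selector would stay Pending, which is exactly what the companion "if not matching" spec asserts.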
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:01:57.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb  8 12:01:57.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-nm2nq run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  8 12:02:10.830: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0208 12:02:08.823980    2698 log.go:172] (0xc00072e000) (0xc0005c4f00) Create stream\nI0208 12:02:08.824041    2698 log.go:172] (0xc00072e000) (0xc0005c4f00) Stream added, broadcasting: 1\nI0208 12:02:08.830708    2698 log.go:172] (0xc00072e000) Reply frame received for 1\nI0208 12:02:08.830795    2698 log.go:172] (0xc00072e000) (0xc0005c4fa0) Create stream\nI0208 12:02:08.830816    2698 log.go:172] (0xc00072e000) (0xc0005c4fa0) Stream added, broadcasting: 3\nI0208 12:02:08.832387    2698 log.go:172] (0xc00072e000) Reply frame received for 3\nI0208 12:02:08.832489    2698 log.go:172] (0xc00072e000) (0xc0007fa000) Create stream\nI0208 12:02:08.832510    2698 log.go:172] (0xc00072e000) (0xc0007fa000) Stream added, broadcasting: 5\nI0208 12:02:08.833880    2698 log.go:172] (0xc00072e000) Reply frame received for 5\nI0208 12:02:08.833948    2698 log.go:172] (0xc00072e000) (0xc000996000) Create stream\nI0208 12:02:08.833972    2698 log.go:172] (0xc00072e000) (0xc000996000) Stream added, broadcasting: 7\nI0208 12:02:08.836512    2698 log.go:172] (0xc00072e000) Reply frame received for 7\nI0208 12:02:08.837300    2698 log.go:172] (0xc0005c4fa0) (3) Writing data frame\nI0208 12:02:08.837659    2698 log.go:172] (0xc0005c4fa0) (3) Writing data frame\nI0208 12:02:08.851166    2698 log.go:172] (0xc00072e000) Data frame received for 5\nI0208 12:02:08.851196    2698 log.go:172] (0xc0007fa000) (5) Data frame handling\nI0208 12:02:08.851220    2698 log.go:172] (0xc0007fa000) (5) Data frame sent\nI0208 12:02:08.851228    2698 log.go:172] (0xc00072e000) Data frame received for 5\nI0208 12:02:08.851231    2698 log.go:172] (0xc0007fa000) (5) Data frame handling\nI0208 12:02:08.851254    2698 log.go:172] (0xc0007fa000) (5) Data frame 
sent\nI0208 12:02:10.771129    2698 log.go:172] (0xc00072e000) Data frame received for 1\nI0208 12:02:10.771325    2698 log.go:172] (0xc0005c4f00) (1) Data frame handling\nI0208 12:02:10.771358    2698 log.go:172] (0xc0005c4f00) (1) Data frame sent\nI0208 12:02:10.771426    2698 log.go:172] (0xc00072e000) (0xc0005c4f00) Stream removed, broadcasting: 1\nI0208 12:02:10.773191    2698 log.go:172] (0xc00072e000) (0xc0005c4fa0) Stream removed, broadcasting: 3\nI0208 12:02:10.773373    2698 log.go:172] (0xc00072e000) (0xc0007fa000) Stream removed, broadcasting: 5\nI0208 12:02:10.773526    2698 log.go:172] (0xc00072e000) (0xc000996000) Stream removed, broadcasting: 7\nI0208 12:02:10.773609    2698 log.go:172] (0xc00072e000) Go away received\nI0208 12:02:10.773886    2698 log.go:172] (0xc00072e000) (0xc0005c4f00) Stream removed, broadcasting: 1\nI0208 12:02:10.773945    2698 log.go:172] (0xc00072e000) (0xc0005c4fa0) Stream removed, broadcasting: 3\nI0208 12:02:10.773972    2698 log.go:172] (0xc00072e000) (0xc0007fa000) Stream removed, broadcasting: 5\nI0208 12:02:10.773997    2698 log.go:172] (0xc00072e000) (0xc000996000) Stream removed, broadcasting: 7\n"
Feb  8 12:02:10.831: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:02:12.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nm2nq" for this suite.
Feb  8 12:02:18.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:02:19.064: INFO: namespace: e2e-tests-kubectl-nm2nq, resource: bindings, ignored listing per whitelist
Feb  8 12:02:19.137: INFO: namespace e2e-tests-kubectl-nm2nq deletion completed in 6.217148106s

• [SLOW TEST:21.601 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
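The stdout recorded above (`abcd1234stdin closed`) is explained by the container command itself: `cat` copies the attached stdin (the test writes `abcd1234`, then closes the stream) to stdout, and once stdin closes, `echo` appends the marker. The pipeline can be checked locally without a cluster:

```shell
# Reproduce the job's container command from the log entry at 12:01:57.
# `cat` echoes the piped input (no trailing newline), then `echo` fires
# after stdin is closed, yielding "abcd1234stdin closed" on one line.
printf 'abcd1234' | sh -c "cat && echo 'stdin closed'"
```

The trailing `job.batch "e2e-test-rm-busybox-job" deleted` line in the stdout comes from kubectl's `--rm` cleanup after the attach ends, not from the container.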
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:02:19.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-db3f7f61-4a6a-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 12:02:19.390: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-sn625" to be "success or failure"
Feb  8 12:02:19.449: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.415859ms
Feb  8 12:02:22.264: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874103054s
Feb  8 12:02:24.294: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.903460821s
Feb  8 12:02:26.908: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.517940537s
Feb  8 12:02:28.929: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.538966854s
Feb  8 12:02:30.945: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.555459048s
STEP: Saw pod success
Feb  8 12:02:30.946: INFO: Pod "pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:02:30.952: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 12:02:31.074: INFO: Waiting for pod pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005 to disappear
Feb  8 12:02:31.091: INFO: Pod pod-projected-configmaps-db424825-4a6a-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:02:31.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sn625" for this suite.
Feb  8 12:02:37.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:02:37.243: INFO: namespace: e2e-tests-projected-sn625, resource: bindings, ignored listing per whitelist
Feb  8 12:02:37.280: INFO: namespace e2e-tests-projected-sn625 deletion completed in 6.178372608s

• [SLOW TEST:18.143 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
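The "mappings and Item mode set" spec above mounts a projected ConfigMap volume where an item remaps a key to a new path and sets its file mode. A sketch of the object wiring, with placeholder names and paths (the real test uses run-generated names like `projected-configmap-test-volume-map-db3f7f61-...`):

```yaml
# Illustrative pod for the projected-configMap-with-items spec.
# `items` remaps key data-1 to path/to/data-2; `mode` sets the file's
# permission bits (0400 = read-only for the owner).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400
```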
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:02:37.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0208 12:03:19.714740       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 12:03:19.714: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:03:19.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-66b4b" for this suite.
Feb  8 12:03:29.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:03:29.832: INFO: namespace: e2e-tests-gc-66b4b, resource: bindings, ignored listing per whitelist
Feb  8 12:03:29.866: INFO: namespace e2e-tests-gc-66b4b deletion completed in 10.146586785s

• [SLOW TEST:52.585 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
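The "if delete options say so" in the spec name refers to the deletion propagation policy: the test deletes the ReplicationController with `propagationPolicy: Orphan`, then waits 30 seconds to confirm the garbage collector leaves the pods running. A sketch of the DeleteOptions body sent with the DELETE request (field names per the meta/v1 API; with the kubectl shipped alongside v1.13 the equivalent is `kubectl delete rc <name> --cascade=false`):

```yaml
# meta/v1 DeleteOptions for the DELETE on the ReplicationController.
# "Orphan" removes owner references from the pods instead of cascading
# the delete, so the GC must not collect them afterwards.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```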
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:03:29.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-06323eb3-4a6b-11ea-95d6-0242ac110005
STEP: Creating secret with name s-test-opt-upd-06323f9d-4a6b-11ea-95d6-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-06323eb3-4a6b-11ea-95d6-0242ac110005
STEP: Updating secret s-test-opt-upd-06323f9d-4a6b-11ea-95d6-0242ac110005
STEP: Creating secret with name s-test-opt-create-06324012-4a6b-11ea-95d6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:04:02.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5prvs" for this suite.
Feb  8 12:04:28.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:04:28.383: INFO: namespace: e2e-tests-secrets-5prvs, resource: bindings, ignored listing per whitelist
Feb  8 12:04:28.389: INFO: namespace e2e-tests-secrets-5prvs deletion completed in 26.22805574s

• [SLOW TEST:58.522 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
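The Secrets spec above relies on the volume being marked `optional`: that is what lets the pod keep running after `s-test-opt-del-...` is deleted, while the kubelet syncs the update to `s-test-opt-upd-...` and the later creation of `s-test-opt-create-...` into the mounted files. A sketch of one such volume, with placeholder names (the real run appends generated UIDs):

```yaml
# Illustrative pod for the "optional updates reflected in volume" spec.
# optional: true means a missing/deleted Secret does not block the mount;
# the kubelet's periodic sync propagates Secret changes into the files.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-upd
      optional: true
```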
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:04:28.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:04:28.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-xmmq4" to be "success or failure"
Feb  8 12:04:28.640: INFO: Pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716992ms
Feb  8 12:04:30.656: INFO: Pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02487987s
Feb  8 12:04:32.677: INFO: Pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04569948s
Feb  8 12:04:35.566: INFO: Pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.935053217s
Feb  8 12:04:37.758: INFO: Pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.126721824s
Feb  8 12:04:39.779: INFO: Pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005": Phase="Failed", Reason="", readiness=false. Elapsed: 11.147892112s
Feb  8 12:04:39.826: INFO: Output of node "hunter-server-hu5at5svl7ps" pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005" container "client-container": 
STEP: delete the pod
Feb  8 12:04:40.892: INFO: Waiting for pod downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005 to disappear
Feb  8 12:04:41.082: INFO: Pod downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005 no longer exists
Feb  8 12:04:41.083: INFO: Unexpected error occurred: expected pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005" success: pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.96.1.240 PodIP:10.32.0.4 StartTime:2020-02-08 12:04:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:ttrpc: client shutting down: ttrpc: closed: unknown,StartedAt:2020-02-08 12:04:34 +0000 UTC,FinishedAt:2020-02-08 12:04:34 +0000 UTC,ContainerID:docker://aa0b9c497a2049e736cdde706c46eb9ca6a94fe52926f527a10b60b8dcaa105c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://aa0b9c497a2049e736cdde706c46eb9ca6a94fe52926f527a10b60b8dcaa105c}] QOSClass:Burstable}
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-projected-xmmq4".
STEP: Found 4 events.
Feb  8 12:04:41.094: INFO: At 2020-02-08 12:04:28 +0000 UTC - event for downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005: {default-scheduler } Scheduled: Successfully assigned e2e-tests-projected-xmmq4/downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005 to hunter-server-hu5at5svl7ps
Feb  8 12:04:41.094: INFO: At 2020-02-08 12:04:34 +0000 UTC - event for downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
Feb  8 12:04:41.094: INFO: At 2020-02-08 12:04:36 +0000 UTC - event for downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005: {kubelet hunter-server-hu5at5svl7ps} Created: Created container
Feb  8 12:04:41.094: INFO: At 2020-02-08 12:04:37 +0000 UTC - event for downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005: {kubelet hunter-server-hu5at5svl7ps} Failed: Error: failed to start container "client-container": Error response from daemon: ttrpc: client shutting down: ttrpc: closed: unknown
Feb  8 12:04:41.121: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Feb  8 12:04:41.121: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 05:36:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 05:36:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:20:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:20:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Feb  8 12:04:41.121: INFO: 
Feb  8 12:04:41.136: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Feb  8 12:04:41.146: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:20973710,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-08 12:04:36 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-08 12:04:36 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-08 12:04:36 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 
2020-02-08 12:04:36 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717] 126698067} {[nginx@sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f nginx:latest] 126698063} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} 
{[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 
gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 
gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Feb  8 12:04:41.147: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Feb  8 12:04:41.154: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Feb  8 12:04:41.175: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Feb  8 12:04:41.175: INFO: 	Container weave ready: true, restart count 0
Feb  8 12:04:41.175: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 12:04:41.175: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Feb  8 12:04:41.175: INFO: 	Container coredns ready: true, restart count 0
Feb  8 12:04:41.175: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Feb  8 12:04:41.175: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Feb  8 12:04:41.175: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Feb  8 12:04:41.175: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Feb  8 12:04:41.175: INFO: 	Container coredns ready: true, restart count 0
Feb  8 12:04:41.175: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Feb  8 12:04:41.175: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 12:04:41.175: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0208 12:04:41.180148       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 12:04:41.226: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Feb  8 12:04:41.226: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:34.634291s}
Feb  8 12:04:41.226: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:34.200237s}
Feb  8 12:04:41.226: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.042695s}
Feb  8 12:04:41.226: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.012707s}
Feb  8 12:04:41.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xmmq4" for this suite.
Feb  8 12:04:49.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:04:49.469: INFO: namespace: e2e-tests-projected-xmmq4, resource: bindings, ignored listing per whitelist
Feb  8 12:04:49.473: INFO: namespace e2e-tests-projected-xmmq4 deletion completed in 8.238923883s

• Failure [21.084 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc001b30410>: {
          s: "expected pod \"downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005\" success: pod \"downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.96.1.240 PodIP:10.32.0.4 StartTime:2020-02-08 12:04:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:ttrpc: client shutting down: ttrpc: closed: unknown,StartedAt:2020-02-08 12:04:34 +0000 UTC,FinishedAt:2020-02-08 12:04:34 +0000 UTC,ContainerID:docker://aa0b9c497a2049e736cdde706c46eb9ca6a94fe52926f527a10b60b8dcaa105c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://aa0b9c497a2049e736cdde706c46eb9ca6a94fe52926f527a10b60b8dcaa105c}] QOSClass:Burstable}",
      }
      expected pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005" success: pod "downwardapi-volume-284b67f0-4a6b-11ea-95d6-0242ac110005" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 12:04:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.96.1.240 PodIP:10.32.0.4 StartTime:2020-02-08 12:04:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:ttrpc: client shutting down: ttrpc: closed: unknown,StartedAt:2020-02-08 12:04:34 +0000 UTC,FinishedAt:2020-02-08 12:04:34 +0000 UTC,ContainerID:docker://aa0b9c497a2049e736cdde706c46eb9ca6a94fe52926f527a10b60b8dcaa105c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://aa0b9c497a2049e736cdde706c46eb9ca6a94fe52926f527a10b60b8dcaa105c}] QOSClass:Burstable}
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395
------------------------------
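Note on the failure above: the pod terminated with `ExitCode:128` and reason `ContainerCannotRun` ("ttrpc: client shutting down"), which is a container-runtime error rather than a downward API problem. For reference, the pod shape this test creates looks roughly like the sketch below (names and the memory value are illustrative, not the generated IDs from the log):

```yaml
# Sketch of a downward API volume pod that exposes the container's
# memory limit as a file, matching the "should provide container's
# memory limit" test; names/values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```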
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:04:49.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 12:04:49.774: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  8 12:04:54.786: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  8 12:04:58.805: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  8 12:04:58.958: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-jcb5w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jcb5w/deployments/test-cleanup-deployment,UID:3a4b9145-4a6b-11ea-a994-fa163e34d433,ResourceVersion:20973767,Generation:1,CreationTimestamp:2020-02-08 12:04:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  8 12:04:58.966: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:04:58.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jcb5w" for this suite.
Feb  8 12:05:11.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:05:11.079: INFO: namespace: e2e-tests-deployment-jcb5w, resource: bindings, ignored listing per whitelist
Feb  8 12:05:11.164: INFO: namespace e2e-tests-deployment-jcb5w deletion completed in 12.16732778s

• [SLOW TEST:21.691 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
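Note: the cleanup behavior exercised above is driven by the `RevisionHistoryLimit:*0` visible in the Deployment dump — with a history limit of 0, old ReplicaSets are garbage-collected as soon as they are fully scaled down. A minimal manifest of that shape, reconstructed from the dump (field layout is illustrative):

```yaml
# Illustrative Deployment matching the dump above; revisionHistoryLimit: 0
# is the setting that causes old ReplicaSets to be deleted immediately.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```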
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:05:11.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  8 12:05:12.788: INFO: Pod name wrapped-volume-race-428a9ae7-4a6b-11ea-95d6-0242ac110005: Found 0 pods out of 5
Feb  8 12:05:17.819: INFO: Pod name wrapped-volume-race-428a9ae7-4a6b-11ea-95d6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-428a9ae7-4a6b-11ea-95d6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-9796z, will wait for the garbage collector to delete the pods
Feb  8 12:07:09.998: INFO: Deleting ReplicationController wrapped-volume-race-428a9ae7-4a6b-11ea-95d6-0242ac110005 took: 29.666933ms
Feb  8 12:07:10.398: INFO: Terminating ReplicationController wrapped-volume-race-428a9ae7-4a6b-11ea-95d6-0242ac110005 pods took: 400.917166ms
STEP: Creating RC which spawns configmap-volume pods
Feb  8 12:08:05.251: INFO: Pod name wrapped-volume-race-a83186c5-4a6b-11ea-95d6-0242ac110005: Found 0 pods out of 5
Feb  8 12:08:10.287: INFO: Pod name wrapped-volume-race-a83186c5-4a6b-11ea-95d6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a83186c5-4a6b-11ea-95d6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-9796z, will wait for the garbage collector to delete the pods
Feb  8 12:10:36.515: INFO: Deleting ReplicationController wrapped-volume-race-a83186c5-4a6b-11ea-95d6-0242ac110005 took: 97.83249ms
Feb  8 12:10:36.816: INFO: Terminating ReplicationController wrapped-volume-race-a83186c5-4a6b-11ea-95d6-0242ac110005 pods took: 300.692597ms
STEP: Creating RC which spawns configmap-volume pods
Feb  8 12:11:23.681: INFO: Pod name wrapped-volume-race-1f979253-4a6c-11ea-95d6-0242ac110005: Found 0 pods out of 5
Feb  8 12:11:28.712: INFO: Pod name wrapped-volume-race-1f979253-4a6c-11ea-95d6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1f979253-4a6c-11ea-95d6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-9796z, will wait for the garbage collector to delete the pods
Feb  8 12:13:23.022: INFO: Deleting ReplicationController wrapped-volume-race-1f979253-4a6c-11ea-95d6-0242ac110005 took: 42.404286ms
Feb  8 12:13:23.623: INFO: Terminating ReplicationController wrapped-volume-race-1f979253-4a6c-11ea-95d6-0242ac110005 pods took: 600.771267ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:14:06.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9796z" for this suite.
Feb  8 12:14:16.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:14:16.903: INFO: namespace: e2e-tests-emptydir-wrapper-9796z, resource: bindings, ignored listing per whitelist
Feb  8 12:14:16.931: INFO: namespace e2e-tests-emptydir-wrapper-9796z deletion completed in 10.175460316s

• [SLOW TEST:545.767 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
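Note: each RC pod in the test above mounts many ConfigMap volumes at once (50 ConfigMaps are created) to provoke the historical race in emptyDir-backed wrapper volumes. A trimmed sketch of the pod shape, with only two of the volumes shown (names, image, and command are illustrative):

```yaml
# Sketch of one configmap-volume pod from the race test; the real test
# mounts far more ConfigMap volumes per pod. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example
spec:
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg-0
      mountPath: /etc/cfg-0
    - name: cfg-1
      mountPath: /etc/cfg-1
  volumes:
  - name: cfg-0
    configMap:
      name: configmap-0
  - name: cfg-1
    configMap:
      name: configmap-1
```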
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:14:16.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-870d3ad2-4a6c-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:14:17.136: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-wv4nz" to be "success or failure"
Feb  8 12:14:17.160: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.834121ms
Feb  8 12:14:19.547: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410789612s
Feb  8 12:14:21.638: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501681535s
Feb  8 12:14:23.658: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521516247s
Feb  8 12:14:26.281: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.144220126s
Feb  8 12:14:28.298: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.161887625s
Feb  8 12:14:30.320: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.183274326s
STEP: Saw pod success
Feb  8 12:14:30.320: INFO: Pod "pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:14:30.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 12:14:30.505: INFO: Waiting for pod pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005 to disappear
Feb  8 12:14:30.558: INFO: Pod pod-projected-secrets-870ec8d2-4a6c-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:14:30.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wv4nz" for this suite.
Feb  8 12:14:36.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:14:36.836: INFO: namespace: e2e-tests-projected-wv4nz, resource: bindings, ignored listing per whitelist
Feb  8 12:14:36.919: INFO: namespace e2e-tests-projected-wv4nz deletion completed in 6.340539573s

• [SLOW TEST:19.988 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
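Note: "with mappings" in the test above means the secret key is remapped to a different file path inside the projected volume via `items`. A sketch of the pod the test creates (secret name, key, and path are illustrative, not the generated ones from the log):

```yaml
# Illustrative projected-secret pod: the key data-1 is remapped to
# new-path-data-1 inside the volume, which is what the mapping test
# verifies. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1
```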
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:14:36.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  8 12:14:37.047: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:15:02.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-qj54z" for this suite.
Feb  8 12:15:26.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:15:26.818: INFO: namespace: e2e-tests-init-container-qj54z, resource: bindings, ignored listing per whitelist
Feb  8 12:15:26.896: INFO: namespace e2e-tests-init-container-qj54z deletion completed in 24.157687141s

• [SLOW TEST:49.976 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
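Note: the test above verifies that init containers run to completion, in order, before the main container of a `restartPolicy: Always` pod starts. A sketch of that pod shape (images and commands are illustrative):

```yaml
# Sketch of a RestartAlways pod with two init containers that must
# each exit successfully before the main container starts.
# Images and commands are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["true"]
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run-1
    image: k8s.gcr.io/pause:3.1
```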
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:15:26.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb  8 12:15:27.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-drzjw'
Feb  8 12:15:29.055: INFO: stderr: ""
Feb  8 12:15:29.055: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb  8 12:15:30.072: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:30.072: INFO: Found 0 / 1
Feb  8 12:15:31.074: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:31.074: INFO: Found 0 / 1
Feb  8 12:15:32.071: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:32.071: INFO: Found 0 / 1
Feb  8 12:15:33.072: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:33.073: INFO: Found 0 / 1
Feb  8 12:15:34.081: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:34.081: INFO: Found 0 / 1
Feb  8 12:15:35.072: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:35.072: INFO: Found 0 / 1
Feb  8 12:15:36.076: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:36.076: INFO: Found 0 / 1
Feb  8 12:15:37.077: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:37.077: INFO: Found 0 / 1
Feb  8 12:15:38.069: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:38.069: INFO: Found 0 / 1
Feb  8 12:15:39.089: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:39.089: INFO: Found 1 / 1
Feb  8 12:15:39.090: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  8 12:15:39.099: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:15:39.099: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  8 12:15:39.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rvqhc redis-master --namespace=e2e-tests-kubectl-drzjw'
Feb  8 12:15:39.313: INFO: stderr: ""
Feb  8 12:15:39.313: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Feb 12:15:37.403 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Feb 12:15:37.403 # Server started, Redis version 3.2.12\n1:M 08 Feb 12:15:37.403 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Feb 12:15:37.403 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  8 12:15:39.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rvqhc redis-master --namespace=e2e-tests-kubectl-drzjw --tail=1'
Feb  8 12:15:39.447: INFO: stderr: ""
Feb  8 12:15:39.447: INFO: stdout: "1:M 08 Feb 12:15:37.403 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  8 12:15:39.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rvqhc redis-master --namespace=e2e-tests-kubectl-drzjw --limit-bytes=1'
Feb  8 12:15:39.584: INFO: stderr: ""
Feb  8 12:15:39.584: INFO: stdout: " "
STEP: exposing timestamps
Feb  8 12:15:39.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rvqhc redis-master --namespace=e2e-tests-kubectl-drzjw --tail=1 --timestamps'
Feb  8 12:15:39.767: INFO: stderr: ""
Feb  8 12:15:39.768: INFO: stdout: "2020-02-08T12:15:37.404874191Z 1:M 08 Feb 12:15:37.403 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  8 12:15:42.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rvqhc redis-master --namespace=e2e-tests-kubectl-drzjw --since=1s'
Feb  8 12:15:42.464: INFO: stderr: ""
Feb  8 12:15:42.464: INFO: stdout: ""
Feb  8 12:15:42.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rvqhc redis-master --namespace=e2e-tests-kubectl-drzjw --since=24h'
Feb  8 12:15:42.640: INFO: stderr: ""
Feb  8 12:15:42.640: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Feb 12:15:37.403 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Feb 12:15:37.403 # Server started, Redis version 3.2.12\n1:M 08 Feb 12:15:37.403 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Feb 12:15:37.403 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb  8 12:15:42.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-drzjw'
Feb  8 12:15:42.778: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 12:15:42.778: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  8 12:15:42.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-drzjw'
Feb  8 12:15:42.944: INFO: stderr: "No resources found.\n"
Feb  8 12:15:42.944: INFO: stdout: ""
Feb  8 12:15:42.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-drzjw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 12:15:43.254: INFO: stderr: ""
Feb  8 12:15:43.255: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:15:43.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-drzjw" for this suite.
Feb  8 12:16:07.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:16:07.512: INFO: namespace: e2e-tests-kubectl-drzjw, resource: bindings, ignored listing per whitelist
Feb  8 12:16:07.531: INFO: namespace e2e-tests-kubectl-drzjw deletion completed in 24.261612895s

• [SLOW TEST:40.635 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
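The `kubectl get pods -o go-template=...` invocation above filters out pods that already have a `deletionTimestamp`, printing one surviving pod name per line. The same template logic can be sketched with Go's standard `text/template` package. This is an illustrative stand-in, not the e2e framework's code: the struct and exported field names (`Items`, `Metadata`, `Name`, `DeletionTimestamp`) are assumptions replacing kubectl's lowercase JSON keys, and `DeletionTimestamp` is modeled as a plain string where an empty value means "not set".

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// podMeta mirrors only the two metadata fields the template touches.
// An empty DeletionTimestamp stands in for the field being unset.
type podMeta struct {
	Name              string
	DeletionTimestamp string
}

type item struct{ Metadata podMeta }

type podList struct{ Items []item }

// filterLive renders the same shape of template kubectl was invoked with:
// pods without a deletionTimestamp are printed, one name per line.
func filterLive(list podList) (string, error) {
	const tpl = `{{ range .Items }}{{ if not .Metadata.DeletionTimestamp }}{{ .Metadata.Name }}{{ "\n" }}{{ end }}{{ end }}`
	t, err := template.New("live").Parse(tpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, list); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Hypothetical pod names; the second pod is mid-deletion and is skipped.
	list := podList{Items: []item{
		{podMeta{Name: "redis-master-abc12"}},
		{podMeta{Name: "redis-master-def34", DeletionTimestamp: "2020-02-08T12:15:42Z"}},
	}}
	out, _ := filterLive(list)
	fmt.Print(out)
}
```

In the log both stdout and stderr are empty, which is the success condition: after the force delete, no pod remained without a deletion timestamp, so the template emitted nothing.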
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:16:07.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-c4nd9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-c4nd9 to expose endpoints map[]
Feb  8 12:16:07.861: INFO: Get endpoints failed (14.205952ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  8 12:16:08.876: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-c4nd9 exposes endpoints map[] (1.029340825s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-c4nd9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-c4nd9 to expose endpoints map[pod1:[80]]
Feb  8 12:16:13.381: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.489474575s elapsed, will retry)
Feb  8 12:16:18.415: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-c4nd9 exposes endpoints map[pod1:[80]] (9.52323671s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-c4nd9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-c4nd9 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  8 12:16:24.148: INFO: Unexpected endpoints: found map[c9af67b2-4a6c-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.690137146s elapsed, will retry)
Feb  8 12:16:28.344: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-c4nd9 exposes endpoints map[pod1:[80] pod2:[80]] (9.885532603s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-c4nd9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-c4nd9 to expose endpoints map[pod2:[80]]
Feb  8 12:16:29.562: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-c4nd9 exposes endpoints map[pod2:[80]] (1.179747006s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-c4nd9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-c4nd9 to expose endpoints map[]
Feb  8 12:16:30.942: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-c4nd9 exposes endpoints map[] (1.359166268s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:16:31.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-c4nd9" for this suite.
Feb  8 12:16:55.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:16:55.325: INFO: namespace: e2e-tests-services-c4nd9, resource: bindings, ignored listing per whitelist
Feb  8 12:16:55.436: INFO: namespace e2e-tests-services-c4nd9 deletion completed in 24.273446565s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.904 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:16:55.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 12:17:23.824: INFO: Container started at 2020-02-08 12:17:04 +0000 UTC, pod became ready at 2020-02-08 12:17:22 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:17:23.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5gzhh" for this suite.
Feb  8 12:17:48.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:17:48.256: INFO: namespace: e2e-tests-container-probe-5gzhh, resource: bindings, ignored listing per whitelist
Feb  8 12:17:48.343: INFO: namespace e2e-tests-container-probe-5gzhh deletion completed in 24.452095735s

• [SLOW TEST:52.907 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:17:48.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-c7tdc in namespace e2e-tests-proxy-26gst
I0208 12:17:49.074276       8 runners.go:184] Created replication controller with name: proxy-service-c7tdc, namespace: e2e-tests-proxy-26gst, replica count: 1
I0208 12:17:50.125104       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:51.125413       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:52.125992       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:53.126597       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:54.127078       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:55.127753       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:56.128585       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:57.129061       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:58.129602       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 12:17:59.130067       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0208 12:18:00.130474       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0208 12:18:01.131062       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0208 12:18:02.131813       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0208 12:18:03.132412       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0208 12:18:04.133040       8 runners.go:184] proxy-service-c7tdc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  8 12:18:04.157: INFO: setup took 15.272322429s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  8 12:18:04.252: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-26gst/pods/proxy-service-c7tdc-lnp96:162/proxy/: bar (200; 94.295934ms)
Feb  8 12:18:04.254: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-26gst/pods/http:proxy-service-c7tdc-lnp96:162/proxy/: bar (200; 96.735184ms)
Feb  8 12:18:04.257: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-26gst/pods/proxy-service-c7tdc-lnp96/proxy/: ...
[log truncated: the response body above, the remaining proxy attempts, and the start of the next test ([sig-apps] Daemon set [Serial]) are missing]
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  8 12:18:17.246: INFO: Number of nodes with available pods: 0
Feb  8 12:18:17.246: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:18.777: INFO: Number of nodes with available pods: 0
Feb  8 12:18:18.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:19.273: INFO: Number of nodes with available pods: 0
Feb  8 12:18:19.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:20.324: INFO: Number of nodes with available pods: 0
Feb  8 12:18:20.324: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:21.327: INFO: Number of nodes with available pods: 0
Feb  8 12:18:21.327: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:22.279: INFO: Number of nodes with available pods: 0
Feb  8 12:18:22.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:23.641: INFO: Number of nodes with available pods: 0
Feb  8 12:18:23.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:24.981: INFO: Number of nodes with available pods: 0
Feb  8 12:18:24.981: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:25.279: INFO: Number of nodes with available pods: 0
Feb  8 12:18:25.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:26.277: INFO: Number of nodes with available pods: 0
Feb  8 12:18:26.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:18:27.263: INFO: Number of nodes with available pods: 1
Feb  8 12:18:27.263: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  8 12:18:27.330: INFO: Number of nodes with available pods: 1
Feb  8 12:18:27.330: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xm9v5, will wait for the garbage collector to delete the pods
Feb  8 12:18:28.925: INFO: Deleting DaemonSet.extensions daemon-set took: 15.435167ms
Feb  8 12:18:29.525: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.706669ms
Feb  8 12:18:42.780: INFO: Number of nodes with available pods: 0
Feb  8 12:18:42.780: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 12:18:42.791: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xm9v5/daemonsets","resourceVersion":"20975473"},"items":null}

Feb  8 12:18:42.804: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xm9v5/pods","resourceVersion":"20975474"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:18:42.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xm9v5" for this suite.
Feb  8 12:18:48.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:18:49.065: INFO: namespace: e2e-tests-daemonsets-xm9v5, resource: bindings, ignored listing per whitelist
Feb  8 12:18:49.131: INFO: namespace e2e-tests-daemonsets-xm9v5 deletion completed in 6.310406756s

• [SLOW TEST:32.130 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:18:49.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  8 12:18:49.377: INFO: PodSpec: initContainers in spec.initContainers
Feb  8 12:20:05.174: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2958645f-4a6d-11ea-95d6-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-lmh9v", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-lmh9v/pods/pod-init-2958645f-4a6d-11ea-95d6-0242ac110005", UID:"29647926-4a6d-11ea-a994-fa163e34d433", ResourceVersion:"20975620", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716761129, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"377103497"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cjptf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002466000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cjptf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cjptf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cjptf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019e60a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020f8060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019e6180)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019e61a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019e61a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019e61ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761129, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761129, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761129, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761129, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0022a2040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d72070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d720e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://50a8e3b92f436361ff0d57a58affdba860604e2abd0f1e48633710f8083f1e7e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022a2080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022a2060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:20:05.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lmh9v" for this suite.
Feb  8 12:20:29.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:20:29.494: INFO: namespace: e2e-tests-init-container-lmh9v, resource: bindings, ignored listing per whitelist
Feb  8 12:20:29.724: INFO: namespace e2e-tests-init-container-lmh9v deletion completed in 24.420212672s

• [SLOW TEST:100.591 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
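The pod status dump above is the evidence the test relies on: `init1` (running `/bin/false`) has `RestartCount:3`, while `init2` and the app container `run1` are both still in a Waiting state with zero restarts — on a RestartAlways pod, a failing init container is retried forever and nothing after it may start. That invariant can be distilled into a small predicate over the few status fields involved (the `containerStatus` struct and `appContainersHeldBack` name are illustrative, not the e2e framework's types):

```go
package main

import "fmt"

// containerStatus captures just the fields of the pod dump that the
// test's invariant depends on.
type containerStatus struct {
	Name         string
	Waiting      bool
	RestartCount int
}

// appContainersHeldBack reports whether, while some init container keeps
// failing (RestartCount > 0), every app container is still Waiting with
// zero restarts — i.e. no app container was ever started.
func appContainersHeldBack(inits, apps []containerStatus) bool {
	failingInit := false
	for _, c := range inits {
		if c.RestartCount > 0 {
			failingInit = true
		}
	}
	if !failingInit {
		return false
	}
	for _, c := range apps {
		if !c.Waiting || c.RestartCount != 0 {
			return false
		}
	}
	return true
}

func main() {
	// Values taken from the pod status dump in the log: init1 has
	// restarted 3 times; init2 and run1 are still Waiting.
	inits := []containerStatus{{"init1", false, 3}, {"init2", true, 0}}
	apps := []containerStatus{{"run1", true, 0}}
	fmt.Println(appContainersHeldBack(inits, apps))
}
```

Note that `init2` (`/bin/true`) never runs either: init containers execute strictly in order, so it stays Waiting until `init1` succeeds, which here it never does.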
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:20:29.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0208 12:20:46.072339       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 12:20:46.072: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:20:46.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xdsft" for this suite.
Feb  8 12:21:12.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:21:12.285: INFO: namespace: e2e-tests-gc-xdsft, resource: bindings, ignored listing per whitelist
Feb  8 12:21:12.371: INFO: namespace e2e-tests-gc-xdsft deletion completed in 26.278607384s

• [SLOW TEST:42.646 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
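The run above repeats one pattern throughout: wait up to a fixed timeout (5m0s for pods, 3m0s for node readiness, 30s for resource discovery), re-checking a condition at an interval. A minimal Python sketch of that poll loop, with a hypothetical `namespace_gone` callback standing in for the framework's real API checks (the actual suite uses the `k8s.io/apimachinery/pkg/util/wait` package, not this code):

```python
import time

def poll_until(condition, interval=2.0, timeout=300.0):
    """Call condition() every `interval` seconds until it returns True
    or `timeout` seconds elapse -- the shape of the log's
    "Waiting up to 5m0s for ..." loops. Sketch only."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False  # timed out waiting for the condition

# Hypothetical condition: "namespace no longer exists" on the 3rd check.
calls = {"n": 0}
def namespace_gone():
    calls["n"] += 1
    return calls["n"] >= 3

ok = poll_until(namespace_gone, interval=0.01, timeout=1.0)
```

Each `INFO: Pod "..."` line in the log is one iteration of such a loop, printed with the elapsed time since the wait began.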
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:21:12.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:21:12.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-hvznb" to be "success or failure"
Feb  8 12:21:12.654: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.288749ms
Feb  8 12:21:14.669: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021759808s
Feb  8 12:21:16.765: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118194142s
Feb  8 12:21:19.249: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602295663s
Feb  8 12:21:21.269: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.621761711s
Feb  8 12:21:23.287: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639859797s
STEP: Saw pod success
Feb  8 12:21:23.287: INFO: Pod "downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:21:23.291: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:21:23.535: INFO: Waiting for pod downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005 to disappear
Feb  8 12:21:23.545: INFO: Pod downwardapi-volume-7eb8d151-4a6d-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:21:23.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hvznb" for this suite.
Feb  8 12:21:30.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:21:30.249: INFO: namespace: e2e-tests-downward-api-hvznb, resource: bindings, ignored listing per whitelist
Feb  8 12:21:30.299: INFO: namespace e2e-tests-downward-api-hvznb deletion completed in 6.743951895s

• [SLOW TEST:17.928 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:21:30.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:21:30.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-pxh2k" to be "success or failure"
Feb  8 12:21:30.678: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.640441ms
Feb  8 12:21:32.755: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087570538s
Feb  8 12:21:34.805: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137033454s
Feb  8 12:21:37.825: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.15662921s
Feb  8 12:21:39.841: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.173150843s
Feb  8 12:21:41.870: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.202352463s
STEP: Saw pod success
Feb  8 12:21:41.870: INFO: Pod "downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:21:41.880: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:21:43.455: INFO: Waiting for pod downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005 to disappear
Feb  8 12:21:43.503: INFO: Pod downwardapi-volume-89797c9f-4a6d-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:21:43.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pxh2k" for this suite.
Feb  8 12:21:49.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:21:49.940: INFO: namespace: e2e-tests-downward-api-pxh2k, resource: bindings, ignored listing per whitelist
Feb  8 12:21:49.966: INFO: namespace e2e-tests-downward-api-pxh2k deletion completed in 6.453365434s

• [SLOW TEST:19.667 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
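The condition named "success or failure" in these waits can be read off the log lines: a pod in `Succeeded` satisfies it, `Failed` would end the wait with an error, and `Pending`/`Running` mean poll again. A sketch of that classification as inferred from the log (not the framework's real API):

```python
def check_success_or_failure(phase):
    """Classify a pod phase the way the "success or failure"
    wait condition does. Sketch inferred from the log output."""
    if phase == "Succeeded":
        return "done"     # condition satisfied; "Saw pod success"
    if phase == "Failed":
        return "error"    # terminal failure; stop waiting
    return "waiting"      # Pending / Running / Unknown: poll again

# The phase sequence a typical test above went through:
phases = ["Pending", "Pending", "Running", "Succeeded"]
results = [check_success_or_failure(p) for p in phases]
```

This is why the subpath tests below log many `Running` iterations before `Succeeded`: `Running` does not satisfy the condition, so polling continues until the test container exits.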
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:21:49.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-952ccf9f-4a6d-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:21:50.339: INFO: Waiting up to 5m0s for pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-fb5p7" to be "success or failure"
Feb  8 12:21:50.361: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.759469ms
Feb  8 12:21:52.372: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033305539s
Feb  8 12:21:54.389: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050552287s
Feb  8 12:21:56.460: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120953743s
Feb  8 12:21:58.495: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156627106s
Feb  8 12:22:00.521: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182337995s
STEP: Saw pod success
Feb  8 12:22:00.521: INFO: Pod "pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:22:00.542: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  8 12:22:02.128: INFO: Waiting for pod pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005 to disappear
Feb  8 12:22:02.166: INFO: Pod pod-secrets-952fcc71-4a6d-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:22:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fb5p7" for this suite.
Feb  8 12:22:08.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:22:08.506: INFO: namespace: e2e-tests-secrets-fb5p7, resource: bindings, ignored listing per whitelist
Feb  8 12:22:08.681: INFO: namespace e2e-tests-secrets-fb5p7 deletion completed in 6.482519454s

• [SLOW TEST:18.714 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:22:08.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:22:08.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-spmsd" to be "success or failure"
Feb  8 12:22:08.943: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79904ms
Feb  8 12:22:10.958: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021518519s
Feb  8 12:22:12.978: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041513364s
Feb  8 12:22:16.016: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.079100269s
Feb  8 12:22:18.032: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.095221784s
Feb  8 12:22:20.051: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.114377318s
STEP: Saw pod success
Feb  8 12:22:20.051: INFO: Pod "downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:22:20.058: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:22:20.622: INFO: Waiting for pod downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005 to disappear
Feb  8 12:22:20.755: INFO: Pod downwardapi-volume-a0468fd0-4a6d-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:22:20.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-spmsd" for this suite.
Feb  8 12:22:26.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:22:27.183: INFO: namespace: e2e-tests-projected-spmsd, resource: bindings, ignored listing per whitelist
Feb  8 12:22:27.195: INFO: namespace e2e-tests-projected-spmsd deletion completed in 6.398415539s

• [SLOW TEST:18.514 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:22:27.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-lzqk
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 12:22:27.516: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lzqk" in namespace "e2e-tests-subpath-sx55r" to be "success or failure"
Feb  8 12:22:27.531: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.555935ms
Feb  8 12:22:29.561: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044514023s
Feb  8 12:22:31.572: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055861572s
Feb  8 12:22:34.082: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566109888s
Feb  8 12:22:36.152: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635512895s
Feb  8 12:22:38.164: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648035519s
Feb  8 12:22:40.678: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.162141832s
Feb  8 12:22:42.716: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.199631273s
Feb  8 12:22:44.724: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 17.207705427s
Feb  8 12:22:46.742: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 19.226004984s
Feb  8 12:22:48.759: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 21.24290023s
Feb  8 12:22:50.794: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 23.2780539s
Feb  8 12:22:52.809: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 25.292350464s
Feb  8 12:22:54.819: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 27.302728959s
Feb  8 12:22:56.839: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 29.322943162s
Feb  8 12:22:58.860: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 31.343496265s
Feb  8 12:23:01.037: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Running", Reason="", readiness=false. Elapsed: 33.520443671s
Feb  8 12:23:03.048: INFO: Pod "pod-subpath-test-downwardapi-lzqk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.531831648s
STEP: Saw pod success
Feb  8 12:23:03.048: INFO: Pod "pod-subpath-test-downwardapi-lzqk" satisfied condition "success or failure"
Feb  8 12:23:03.053: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-lzqk container test-container-subpath-downwardapi-lzqk: 
STEP: delete the pod
Feb  8 12:23:04.093: INFO: Waiting for pod pod-subpath-test-downwardapi-lzqk to disappear
Feb  8 12:23:04.109: INFO: Pod pod-subpath-test-downwardapi-lzqk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lzqk
Feb  8 12:23:04.109: INFO: Deleting pod "pod-subpath-test-downwardapi-lzqk" in namespace "e2e-tests-subpath-sx55r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:23:04.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sx55r" for this suite.
Feb  8 12:23:10.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:23:10.315: INFO: namespace: e2e-tests-subpath-sx55r, resource: bindings, ignored listing per whitelist
Feb  8 12:23:10.346: INFO: namespace e2e-tests-subpath-sx55r deletion completed in 6.218060249s

• [SLOW TEST:43.151 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:23:10.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c50bfcfe-4a6d-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 12:23:10.859: INFO: Waiting up to 5m0s for pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005" in namespace "e2e-tests-configmap-gjw4s" to be "success or failure"
Feb  8 12:23:10.873: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.246493ms
Feb  8 12:23:12.983: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123788261s
Feb  8 12:23:15.004: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144098229s
Feb  8 12:23:17.015: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155143305s
Feb  8 12:23:19.031: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171378892s
Feb  8 12:23:21.046: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18666539s
STEP: Saw pod success
Feb  8 12:23:21.046: INFO: Pod "pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:23:21.059: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  8 12:23:21.969: INFO: Waiting for pod pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005 to disappear
Feb  8 12:23:22.195: INFO: Pod pod-configmaps-c50de6f1-4a6d-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:23:22.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gjw4s" for this suite.
Feb  8 12:23:28.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:23:28.382: INFO: namespace: e2e-tests-configmap-gjw4s, resource: bindings, ignored listing per whitelist
Feb  8 12:23:28.633: INFO: namespace e2e-tests-configmap-gjw4s deletion completed in 6.419088307s

• [SLOW TEST:18.286 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:23:28.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-zm8x
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 12:23:28.902: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zm8x" in namespace "e2e-tests-subpath-bl9qn" to be "success or failure"
Feb  8 12:23:28.956: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 54.676125ms
Feb  8 12:23:31.446: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544055401s
Feb  8 12:23:33.465: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.563613754s
Feb  8 12:23:35.680: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778143059s
Feb  8 12:23:37.696: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794737085s
Feb  8 12:23:39.712: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.810539258s
Feb  8 12:23:42.069: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 13.167066689s
Feb  8 12:23:44.095: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.193454989s
Feb  8 12:23:46.124: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 17.221945303s
Feb  8 12:23:48.145: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 19.243052992s
Feb  8 12:23:50.225: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 21.3228793s
Feb  8 12:23:52.241: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 23.339738442s
Feb  8 12:23:54.259: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 25.356818123s
Feb  8 12:23:56.284: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 27.382641162s
Feb  8 12:23:58.302: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 29.400747353s
Feb  8 12:24:00.319: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 31.41762357s
Feb  8 12:24:02.343: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Running", Reason="", readiness=false. Elapsed: 33.441372105s
Feb  8 12:24:04.360: INFO: Pod "pod-subpath-test-configmap-zm8x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.458147091s
STEP: Saw pod success
Feb  8 12:24:04.360: INFO: Pod "pod-subpath-test-configmap-zm8x" satisfied condition "success or failure"
Feb  8 12:24:04.366: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-zm8x container test-container-subpath-configmap-zm8x: 
STEP: delete the pod
Feb  8 12:24:05.268: INFO: Waiting for pod pod-subpath-test-configmap-zm8x to disappear
Feb  8 12:24:05.584: INFO: Pod pod-subpath-test-configmap-zm8x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zm8x
Feb  8 12:24:05.584: INFO: Deleting pod "pod-subpath-test-configmap-zm8x" in namespace "e2e-tests-subpath-bl9qn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:24:05.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-bl9qn" for this suite.
Feb  8 12:24:11.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:24:12.033: INFO: namespace: e2e-tests-subpath-bl9qn, resource: bindings, ignored listing per whitelist
Feb  8 12:24:12.058: INFO: namespace e2e-tests-subpath-bl9qn deletion completed in 6.418919852s

• [SLOW TEST:43.425 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
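Each poll line records the pod phase and the time elapsed since the wait began, in either seconds or milliseconds. A small helper that extracts both from lines in this format (the regex is written against the log lines above; it is a convenience sketch, not part of the test suite):

```python
import re

# Matches e.g.: Phase="Running", ... Elapsed: 17.221945303s
LINE = re.compile(r'Phase="(?P<phase>\w+)".*Elapsed: (?P<elapsed>[\d.]+)(?P<unit>ms|s)')

def parse_poll_line(line):
    """Return (phase, elapsed_seconds) from one poll log line,
    or None if the line does not match the format."""
    m = LINE.search(line)
    if not m:
        return None
    value = float(m.group("elapsed"))
    if m.group("unit") == "ms":
        value /= 1000.0  # first poll often lands within milliseconds
    return m.group("phase"), value

sample = ('Pod "pod-subpath-test-configmap-zm8x": Phase="Running", '
          'Reason="", readiness=false. Elapsed: 17.221945303s')
parsed = parse_poll_line(sample)
```

Run over a test's poll lines, this yields the phase-transition timeline (e.g. how long a pod sat in `Pending` before `Running`), which is useful when comparing the slow subpath tests against the ~10s ConfigMap/Secret ones.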
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:24:12.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:24:12.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-49p8p" for this suite.
Feb  8 12:24:18.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:24:18.457: INFO: namespace: e2e-tests-services-49p8p, resource: bindings, ignored listing per whitelist
Feb  8 12:24:18.622: INFO: namespace e2e-tests-services-49p8p deletion completed in 6.318398403s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.563 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:24:18.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 12:24:18.830: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  8 12:24:18.849: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  8 12:24:23.866: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  8 12:24:29.889: INFO: Creating deployment "test-rolling-update-deployment"
Feb  8 12:24:29.901: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  8 12:24:29.914: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  8 12:24:31.935: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  8 12:24:31.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:24:33.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:24:36.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:24:37.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:24:39.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716761470, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:24:41.983: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  8 12:24:42.039: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-wvlpt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wvlpt/deployments/test-rolling-update-deployment,UID:f44ec33f-4a6d-11ea-a994-fa163e34d433,ResourceVersion:20976322,Generation:1,CreationTimestamp:2020-02-08 12:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-08 12:24:30 +0000 UTC 2020-02-08 12:24:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-08 12:24:40 +0000 UTC 2020-02-08 12:24:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  8 12:24:42.050: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-wvlpt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wvlpt/replicasets/test-rolling-update-deployment-75db98fb4c,UID:f4570df6-4a6d-11ea-a994-fa163e34d433,ResourceVersion:20976313,Generation:1,CreationTimestamp:2020-02-08 12:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f44ec33f-4a6d-11ea-a994-fa163e34d433 0xc000ac7277 0xc000ac7278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  8 12:24:42.050: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  8 12:24:42.051: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-wvlpt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wvlpt/replicasets/test-rolling-update-controller,UID:edb73ac7-4a6d-11ea-a994-fa163e34d433,ResourceVersion:20976321,Generation:2,CreationTimestamp:2020-02-08 12:24:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f44ec33f-4a6d-11ea-a994-fa163e34d433 0xc000ac70e7 0xc000ac70e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 12:24:42.059: INFO: Pod "test-rolling-update-deployment-75db98fb4c-mrzrq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-mrzrq,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-wvlpt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wvlpt/pods/test-rolling-update-deployment-75db98fb4c-mrzrq,UID:f46b1d9f-4a6d-11ea-a994-fa163e34d433,ResourceVersion:20976312,Generation:0,CreationTimestamp:2020-02-08 12:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c f4570df6-4a6d-11ea-a994-fa163e34d433 0xc000f316e7 0xc000f316e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-htwl6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-htwl6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-htwl6 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f31830} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f31850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:24:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:24:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:24:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:24:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-08 12:24:30 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-08 12:24:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b2f188293d6e3c26b6242fdfbd415ff692eb393a57819be1220bca5fbd3ce56c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:24:42.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wvlpt" for this suite.
Feb  8 12:24:50.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:24:50.441: INFO: namespace: e2e-tests-deployment-wvlpt, resource: bindings, ignored listing per whitelist
Feb  8 12:24:50.563: INFO: namespace e2e-tests-deployment-wvlpt deletion completed in 8.496409092s

• [SLOW TEST:31.941 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
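A manifest equivalent to the Deployment dumped in the log above (replicas, selector, image, and the default 25% rolling-update parameters are all taken from the dump; this is an illustrative reconstruction, not the e2e suite's actual fixture):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired pods down during the update
      maxSurge: 25%         # at most 25% extra pods created during the update
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Because the selector matches the pre-existing `test-rolling-update-controller` ReplicaSet's pods, the Deployment adopts that ReplicaSet, then scales it to 0 while the new `test-rolling-update-deployment-75db98fb4c` ReplicaSet comes up, which is exactly the old/new ReplicaSet state the test verifies.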
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:24:50.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:25:02.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-q6xzt" for this suite.
Feb  8 12:25:47.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:25:47.267: INFO: namespace: e2e-tests-kubelet-test-q6xzt, resource: bindings, ignored listing per whitelist
Feb  8 12:25:47.325: INFO: namespace e2e-tests-kubelet-test-q6xzt deletion completed in 44.358269122s

• [SLOW TEST:56.760 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
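The read-only-root-filesystem test above schedules a busybox pod whose container is denied writes to `/`. A minimal sketch of such a pod (name, image, and command are illustrative, not the test's actual fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # The write to the root filesystem is expected to fail.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
  restartPolicy: Never
```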
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:25:47.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:25:47.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-frlkj" to be "success or failure"
Feb  8 12:25:47.645: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735049ms
Feb  8 12:25:49.699: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062469962s
Feb  8 12:25:51.776: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139707347s
Feb  8 12:25:54.115: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.477985845s
Feb  8 12:25:56.141: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504546472s
Feb  8 12:25:58.163: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.526701367s
STEP: Saw pod success
Feb  8 12:25:58.164: INFO: Pod "downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:25:58.171: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:25:58.318: INFO: Waiting for pod downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005 to disappear
Feb  8 12:25:58.367: INFO: Pod downwardapi-volume-22a288fd-4a6e-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:25:58.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-frlkj" for this suite.
Feb  8 12:26:04.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:26:04.822: INFO: namespace: e2e-tests-projected-frlkj, resource: bindings, ignored listing per whitelist
Feb  8 12:26:04.871: INFO: namespace e2e-tests-projected-frlkj deletion completed in 6.490940025s

• [SLOW TEST:17.546 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
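The projected downward API test above mounts the container's own CPU limit as a file in a projected volume. A sketch of the pattern being exercised (names and paths are illustrative; the mechanism, `resourceFieldRef` inside a `projected`/`downwardAPI` volume source, is the documented API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
  restartPolicy: Never
```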
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:26:04.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  8 12:26:05.086: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fchnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-fchnj/configmaps/e2e-watch-test-label-changed,UID:2d08367f-4a6e-11ea-a994-fa163e34d433,ResourceVersion:20976502,Generation:0,CreationTimestamp:2020-02-08 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 12:26:05.086: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fchnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-fchnj/configmaps/e2e-watch-test-label-changed,UID:2d08367f-4a6e-11ea-a994-fa163e34d433,ResourceVersion:20976503,Generation:0,CreationTimestamp:2020-02-08 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  8 12:26:05.086: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fchnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-fchnj/configmaps/e2e-watch-test-label-changed,UID:2d08367f-4a6e-11ea-a994-fa163e34d433,ResourceVersion:20976504,Generation:0,CreationTimestamp:2020-02-08 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  8 12:26:15.174: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fchnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-fchnj/configmaps/e2e-watch-test-label-changed,UID:2d08367f-4a6e-11ea-a994-fa163e34d433,ResourceVersion:20976518,Generation:0,CreationTimestamp:2020-02-08 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 12:26:15.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fchnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-fchnj/configmaps/e2e-watch-test-label-changed,UID:2d08367f-4a6e-11ea-a994-fa163e34d433,ResourceVersion:20976519,Generation:0,CreationTimestamp:2020-02-08 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  8 12:26:15.174: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fchnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-fchnj/configmaps/e2e-watch-test-label-changed,UID:2d08367f-4a6e-11ea-a994-fa163e34d433,ResourceVersion:20976520,Generation:0,CreationTimestamp:2020-02-08 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:26:15.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fchnj" for this suite.
Feb  8 12:26:21.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:26:21.345: INFO: namespace: e2e-tests-watch-fchnj, resource: bindings, ignored listing per whitelist
Feb  8 12:26:21.421: INFO: namespace e2e-tests-watch-fchnj deletion completed in 6.230093136s

• [SLOW TEST:16.549 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
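The watch test above filters events by label selector, so re-labeling the ConfigMap makes it "leave" the watch (a DELETED event) and restoring the label makes it "re-enter" (an ADDED event). The watched object, reconstructed from the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: e2e-tests-watch-fchnj
  labels:
    # The watch selects on this label; changing its value removes the
    # object from the watch's view without deleting the object itself.
    watch-this-configmap: label-changed-and-restored
data: {}
```

The same behavior can be observed interactively with `kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch`.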
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:26:21.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  8 12:26:41.959: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:41.978: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 12:26:43.979: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:44.018: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 12:26:45.979: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:46.027: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 12:26:47.981: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:48.001: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 12:26:49.978: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:49.999: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 12:26:51.978: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:52.005: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 12:26:53.978: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 12:26:53.996: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:26:53.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9drvp" for this suite.
Feb  8 12:27:18.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:27:18.222: INFO: namespace: e2e-tests-container-lifecycle-hook-9drvp, resource: bindings, ignored listing per whitelist
Feb  8 12:27:18.246: INFO: namespace e2e-tests-container-lifecycle-hook-9drvp deletion completed in 24.24047137s

• [SLOW TEST:56.825 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
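The "Waiting for pod ... to disappear / still exists" lines above come from a poll-until-gone loop with a roughly 2-second cadence and an overall timeout. A simplified sketch of that pattern, assuming nothing about the framework's actual implementation (`wait_for_disappear` and `fake_pod_exists` are hypothetical names):

```python
import time

def wait_for_disappear(still_exists, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll still_exists() every `interval` seconds until it returns False
    or `timeout` elapses. Returns True if the object disappeared in time."""
    deadline = clock() + timeout
    while clock() < deadline:
        if not still_exists():
            return True
        sleep(interval)
    return False

# Simulated pod that needs three polls before it is gone.
polls = {"n": 0}
def fake_pod_exists():
    polls["n"] += 1
    return polls["n"] < 3

print(wait_for_disappear(fake_pod_exists, timeout=10, interval=0))  # -> True
```

Injecting the clock and sleep functions keeps the helper testable without real delays; the e2e framework's own wait helpers follow the same poll-with-deadline shape.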
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:27:18.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 12:27:18.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-948ck'
Feb  8 12:27:20.236: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 12:27:20.236: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb  8 12:27:20.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-948ck'
Feb  8 12:27:20.425: INFO: stderr: ""
Feb  8 12:27:20.425: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:27:20.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-948ck" for this suite.
Feb  8 12:27:44.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:27:44.721: INFO: namespace: e2e-tests-kubectl-948ck, resource: bindings, ignored listing per whitelist
Feb  8 12:27:44.834: INFO: namespace e2e-tests-kubectl-948ck deletion completed in 24.26159192s

• [SLOW TEST:26.588 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:27:44.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb  8 12:27:45.070: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix322794797/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:27:45.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kmjhg" for this suite.
Feb  8 12:27:51.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:27:51.268: INFO: namespace: e2e-tests-kubectl-kmjhg, resource: bindings, ignored listing per whitelist
Feb  8 12:27:51.415: INFO: namespace e2e-tests-kubectl-kmjhg deletion completed in 6.250278523s

• [SLOW TEST:6.579 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
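The proxy test above starts `kubectl proxy --unix-socket=/path` and then fetches `/api/` through that socket. The same mechanics can be demonstrated locally with a tiny HTTP server bound to a unix socket and a raw-socket client; this is a standalone sketch (no kubectl or cluster involved), and the fake `/api/` payload is illustrative only:

```python
import http.server
import os
import socket
import socketserver
import tempfile
import threading

class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        # BaseHTTPRequestHandler expects a (host, port)-style client address.
        request, _ = super().get_request()
        return request, ("unix", 0)

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"versions":["v1"]}' if self.path == "/api/" else b"{}"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
server = UnixHTTPServer(path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Speak plain HTTP over the unix socket, as a client of the "proxy" would.
with socket.socket(socket.AF_UNIX) as c:
    c.connect(path)
    c.sendall(b"GET /api/ HTTP/1.0\r\nHost: localhost\r\n\r\n")
    reply = c.makefile("rb").read().decode()
server.shutdown()
print(reply.splitlines()[0])
```

HTTP/1.0 is used so the server closes the connection after the response, letting the client read to EOF. Requires a platform with `AF_UNIX` support (Linux, macOS).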
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:27:51.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  8 12:27:51.656: INFO: namespace e2e-tests-kubectl-zqnn8
Feb  8 12:27:51.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zqnn8'
Feb  8 12:27:52.172: INFO: stderr: ""
Feb  8 12:27:52.172: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  8 12:27:53.183: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:53.183: INFO: Found 0 / 1
Feb  8 12:27:54.633: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:54.633: INFO: Found 0 / 1
Feb  8 12:27:55.191: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:55.191: INFO: Found 0 / 1
Feb  8 12:27:56.195: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:56.195: INFO: Found 0 / 1
Feb  8 12:27:57.916: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:57.916: INFO: Found 0 / 1
Feb  8 12:27:58.459: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:58.459: INFO: Found 0 / 1
Feb  8 12:27:59.395: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:27:59.395: INFO: Found 0 / 1
Feb  8 12:28:00.189: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:28:00.189: INFO: Found 0 / 1
Feb  8 12:28:01.224: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:28:01.224: INFO: Found 1 / 1
Feb  8 12:28:01.224: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  8 12:28:01.232: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:28:01.232: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  8 12:28:01.232: INFO: wait on redis-master startup in e2e-tests-kubectl-zqnn8 
Feb  8 12:28:01.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mnt4f redis-master --namespace=e2e-tests-kubectl-zqnn8'
Feb  8 12:28:01.506: INFO: stderr: ""
Feb  8 12:28:01.506: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Feb 12:28:00.151 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Feb 12:28:00.151 # Server started, Redis version 3.2.12\n1:M 08 Feb 12:28:00.151 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Feb 12:28:00.151 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  8 12:28:01.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-zqnn8'
Feb  8 12:28:01.853: INFO: stderr: ""
Feb  8 12:28:01.853: INFO: stdout: "service/rm2 exposed\n"
Feb  8 12:28:01.864: INFO: Service rm2 in namespace e2e-tests-kubectl-zqnn8 found.
STEP: exposing service
Feb  8 12:28:03.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-zqnn8'
Feb  8 12:28:04.170: INFO: stderr: ""
Feb  8 12:28:04.170: INFO: stdout: "service/rm3 exposed\n"
Feb  8 12:28:04.270: INFO: Service rm3 in namespace e2e-tests-kubectl-zqnn8 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:28:06.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zqnn8" for this suite.
Feb  8 12:28:30.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:28:30.433: INFO: namespace: e2e-tests-kubectl-zqnn8, resource: bindings, ignored listing per whitelist
Feb  8 12:28:30.600: INFO: namespace e2e-tests-kubectl-zqnn8 deletion completed in 24.302780323s

• [SLOW TEST:39.183 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
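In the expose test above, `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` creates a Service whose `port` is what clients connect to and whose `targetPort` is where the backing pods actually listen; exposing that Service again as `rm3` reuses its selector with a new front port. A minimal dict-based sketch of that port mapping (the `expose` helper is hypothetical, not kubectl's implementation):

```python
def expose(name, port, target_port, selector):
    """Build a minimal Service-like object: clients hit `port`,
    traffic is forwarded to the pods' `target_port`."""
    return {
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": selector,
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# expose rc redis-master --name=rm2 --port=1234 --target-port=6379
rm2 = expose("rm2", 1234, 6379, {"app": "redis"})

# expose service rm2 --name=rm3 --port=2345 --target-port=6379:
# same selector and backend port, new front port.
rm3 = expose("rm3", 2345,
             rm2["spec"]["ports"][0]["targetPort"],
             rm2["spec"]["selector"])

print(rm2["spec"]["ports"][0], rm3["spec"]["ports"][0])
```

Both services route to the same Redis pods on 6379; only the client-facing port differs, which is exactly what the test asserts by finding `rm2` and `rm3` after each expose.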
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:28:30.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  8 12:28:41.645: INFO: Successfully updated pod "pod-update-840f9938-4a6e-11ea-95d6-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb  8 12:28:41.740: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:28:41.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5hwnz" for this suite.
Feb  8 12:29:03.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:29:04.019: INFO: namespace: e2e-tests-pods-5hwnz, resource: bindings, ignored listing per whitelist
Feb  8 12:29:04.132: INFO: namespace e2e-tests-pods-5hwnz deletion completed in 22.383377974s

• [SLOW TEST:33.532 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:29:04.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  8 12:32:05.821: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:05.896: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:07.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:07.924: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:09.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:09.925: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:11.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:11.929: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:13.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:13.935: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:15.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:15.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:17.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:17.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:19.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:19.926: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:21.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:21.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:23.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:23.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:25.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:25.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:27.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:27.915: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:29.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:29.917: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:31.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:31.921: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:33.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:33.923: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:35.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:35.911: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:37.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:37.917: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:39.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:39.913: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:41.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:41.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:43.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:43.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:45.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:45.909: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:47.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:47.913: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:49.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:49.916: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:51.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:51.915: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:53.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:53.946: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:55.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:55.921: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:57.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:57.912: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:32:59.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:32:59.923: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:01.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:01.917: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:03.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:03.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:05.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:05.921: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:07.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:07.913: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:09.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:09.911: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:11.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:11.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:13.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:13.932: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:15.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:15.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:17.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:17.913: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:19.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:19.911: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:21.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:21.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:23.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:23.911: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:25.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:25.912: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:27.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:27.948: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:29.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:29.908: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:31.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:31.923: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:33.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:33.918: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:35.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:35.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:37.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:37.916: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:39.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:39.927: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:41.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:41.919: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:43.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:43.931: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:45.897: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:45.925: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:47.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:47.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:49.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:49.915: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:51.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:51.911: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 12:33:53.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 12:33:53.913: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:33:53.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-95f2z" for this suite.
Feb  8 12:34:17.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:34:18.078: INFO: namespace: e2e-tests-container-lifecycle-hook-95f2z, resource: bindings, ignored listing per whitelist
Feb  8 12:34:18.124: INFO: namespace e2e-tests-container-lifecycle-hook-95f2z deletion completed in 24.199838405s

• [SLOW TEST:313.991 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:34:18.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb  8 12:34:28.308: INFO: Pod pod-hostip-52fe12fe-4a6f-11ea-95d6-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:34:28.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qgkk6" for this suite.
Feb  8 12:34:46.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:34:46.393: INFO: namespace: e2e-tests-pods-qgkk6, resource: bindings, ignored listing per whitelist
Feb  8 12:34:46.548: INFO: namespace e2e-tests-pods-qgkk6 deletion completed in 18.232456558s

• [SLOW TEST:28.424 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:34:46.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 12:34:46.915: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb  8 12:34:47.014: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gghzk/daemonsets","resourceVersion":"20977354"},"items":null}

Feb  8 12:34:47.036: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gghzk/pods","resourceVersion":"20977354"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:34:47.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gghzk" for this suite.
Feb  8 12:34:53.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:34:53.353: INFO: namespace: e2e-tests-daemonsets-gghzk, resource: bindings, ignored listing per whitelist
Feb  8 12:34:53.398: INFO: namespace e2e-tests-daemonsets-gghzk deletion completed in 6.297920322s

S [SKIPPING] [6.849 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb  8 12:34:46.915: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
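The SKIPPING entry above comes from a node-count guard in the e2e framework: the spec needs two schedulable nodes, and the lister reported -1 (an error sentinel), so the spec was skipped. A minimal Python sketch of that guard logic; the names `SkipTest` and `skip_unless_node_count_at_least` are illustrative, not the framework's real identifiers.

```python
class SkipTest(Exception):
    """Raised to skip a spec, mirroring Ginkgo's Skip() (illustrative name)."""


def skip_unless_node_count_at_least(node_count: int, minimum: int) -> None:
    # A node_count of -1 means the node lister failed, as in the log entry
    # "Requires at least 2 nodes (not -1)", so the guard skips in that case too.
    if node_count < minimum:
        raise SkipTest(f"Requires at least {minimum} nodes (not {node_count})")
```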
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:34:53.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0208 12:35:24.223237       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 12:35:24.223: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:35:24.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fb5n8" for this suite.
Feb  8 12:35:32.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:35:32.663: INFO: namespace: e2e-tests-gc-fb5n8, resource: bindings, ignored listing per whitelist
Feb  8 12:35:32.705: INFO: namespace e2e-tests-gc-fb5n8 deletion completed in 8.472918189s

• [SLOW TEST:39.307 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
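The Garbage collector spec above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet survives. A hedged sketch of the `DeleteOptions` body that such a DELETE request carries (a plain JSON builder, not a real client call):

```python
import json


def delete_options(propagation_policy: str) -> str:
    """Build the JSON DeleteOptions body sent with the DELETE request.

    Valid policies are "Orphan", "Background", and "Foreground"; "Orphan"
    tells the garbage collector to leave dependents (here, the ReplicaSet)
    in place instead of cascading the delete.
    """
    if propagation_policy not in ("Orphan", "Background", "Foreground"):
        raise ValueError(f"unknown propagationPolicy: {propagation_policy}")
    body = {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": propagation_policy,
    }
    return json.dumps(body)
```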
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:35:32.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  8 12:35:33.039: INFO: Number of nodes with available pods: 0
Feb  8 12:35:33.039: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:35.261: INFO: Number of nodes with available pods: 0
Feb  8 12:35:35.261: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:36.065: INFO: Number of nodes with available pods: 0
Feb  8 12:35:36.065: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:37.081: INFO: Number of nodes with available pods: 0
Feb  8 12:35:37.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:38.066: INFO: Number of nodes with available pods: 0
Feb  8 12:35:38.066: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:39.071: INFO: Number of nodes with available pods: 0
Feb  8 12:35:39.071: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:40.063: INFO: Number of nodes with available pods: 0
Feb  8 12:35:40.063: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:41.317: INFO: Number of nodes with available pods: 0
Feb  8 12:35:41.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:42.279: INFO: Number of nodes with available pods: 0
Feb  8 12:35:42.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:43.059: INFO: Number of nodes with available pods: 0
Feb  8 12:35:43.059: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:44.134: INFO: Number of nodes with available pods: 0
Feb  8 12:35:44.134: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:45.067: INFO: Number of nodes with available pods: 0
Feb  8 12:35:45.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:46.064: INFO: Number of nodes with available pods: 1
Feb  8 12:35:46.064: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  8 12:35:46.315: INFO: Number of nodes with available pods: 0
Feb  8 12:35:46.315: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:47.349: INFO: Number of nodes with available pods: 0
Feb  8 12:35:47.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:48.473: INFO: Number of nodes with available pods: 0
Feb  8 12:35:48.473: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:49.348: INFO: Number of nodes with available pods: 0
Feb  8 12:35:49.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:50.347: INFO: Number of nodes with available pods: 0
Feb  8 12:35:50.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:51.344: INFO: Number of nodes with available pods: 0
Feb  8 12:35:51.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:52.349: INFO: Number of nodes with available pods: 0
Feb  8 12:35:52.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:53.341: INFO: Number of nodes with available pods: 0
Feb  8 12:35:53.341: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:54.400: INFO: Number of nodes with available pods: 0
Feb  8 12:35:54.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:55.344: INFO: Number of nodes with available pods: 0
Feb  8 12:35:55.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:56.412: INFO: Number of nodes with available pods: 0
Feb  8 12:35:56.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:57.334: INFO: Number of nodes with available pods: 0
Feb  8 12:35:57.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:58.334: INFO: Number of nodes with available pods: 0
Feb  8 12:35:58.334: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:35:59.343: INFO: Number of nodes with available pods: 0
Feb  8 12:35:59.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:00.390: INFO: Number of nodes with available pods: 0
Feb  8 12:36:00.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:01.342: INFO: Number of nodes with available pods: 0
Feb  8 12:36:01.342: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:02.344: INFO: Number of nodes with available pods: 0
Feb  8 12:36:02.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:03.332: INFO: Number of nodes with available pods: 0
Feb  8 12:36:03.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:04.345: INFO: Number of nodes with available pods: 0
Feb  8 12:36:04.346: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:05.348: INFO: Number of nodes with available pods: 0
Feb  8 12:36:05.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:06.368: INFO: Number of nodes with available pods: 0
Feb  8 12:36:06.368: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:07.360: INFO: Number of nodes with available pods: 0
Feb  8 12:36:07.360: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:09.237: INFO: Number of nodes with available pods: 0
Feb  8 12:36:09.237: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:09.621: INFO: Number of nodes with available pods: 0
Feb  8 12:36:09.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:10.334: INFO: Number of nodes with available pods: 0
Feb  8 12:36:10.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:11.332: INFO: Number of nodes with available pods: 0
Feb  8 12:36:11.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 12:36:12.343: INFO: Number of nodes with available pods: 1
Feb  8 12:36:12.343: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jm8q8, will wait for the garbage collector to delete the pods
Feb  8 12:36:12.416: INFO: Deleting DaemonSet.extensions daemon-set took: 15.46666ms
Feb  8 12:36:12.717: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.976015ms
Feb  8 12:36:22.633: INFO: Number of nodes with available pods: 0
Feb  8 12:36:22.633: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 12:36:22.638: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jm8q8/daemonsets","resourceVersion":"20977571"},"items":null}

Feb  8 12:36:22.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jm8q8/pods","resourceVersion":"20977571"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:36:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-jm8q8" for this suite.
Feb  8 12:36:28.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:36:28.934: INFO: namespace: e2e-tests-daemonsets-jm8q8, resource: bindings, ignored listing per whitelist
Feb  8 12:36:28.950: INFO: namespace e2e-tests-daemonsets-jm8q8 deletion completed in 6.294675298s

• [SLOW TEST:56.245 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
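The long runs of "Number of nodes with available pods" lines above are the framework polling a condition on an interval until it holds or a deadline passes. A minimal sketch of that poll-until pattern, assuming injectable clock/sleep functions for testability (the real framework uses Go's `wait.Poll` helpers; these names are illustrative):

```python
import time
from typing import Callable


def wait_for(condition: Callable[[], bool], timeout_s: float,
             interval_s: float = 1.0,
             now: Callable[[], float] = time.monotonic,
             sleep: Callable[[float], None] = time.sleep) -> bool:
    """Poll `condition` until it returns True or `timeout_s` elapses.

    Returns True on success, False on timeout. `now` and `sleep` are
    injectable so the loop can be exercised with a fake clock.
    """
    deadline = now() + timeout_s
    while True:
        if condition():
            return True
        if now() >= deadline:
            return False
        sleep(interval_s)
```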
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:36:28.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:36:41.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-fptp6" for this suite.
Feb  8 12:36:47.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:36:47.706: INFO: namespace: e2e-tests-emptydir-wrapper-fptp6, resource: bindings, ignored listing per whitelist
Feb  8 12:36:47.758: INFO: namespace e2e-tests-emptydir-wrapper-fptp6 deletion completed in 6.209738482s

• [SLOW TEST:18.807 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:36:47.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  8 12:36:47.980: INFO: Waiting up to 5m0s for pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-txpzr" to be "success or failure"
Feb  8 12:36:48.082: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 101.359624ms
Feb  8 12:36:50.128: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1481252s
Feb  8 12:36:52.145: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165103851s
Feb  8 12:36:54.319: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338561247s
Feb  8 12:36:56.336: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.355712435s
Feb  8 12:36:58.348: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.367895967s
STEP: Saw pod success
Feb  8 12:36:58.348: INFO: Pod "pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:36:58.352: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 12:36:58.665: INFO: Waiting for pod pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005 to disappear
Feb  8 12:36:58.673: INFO: Pod pod-ac3c9d79-4a6f-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:36:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-txpzr" for this suite.
Feb  8 12:37:04.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:37:05.072: INFO: namespace: e2e-tests-emptydir-txpzr, resource: bindings, ignored listing per whitelist
Feb  8 12:37:05.203: INFO: namespace e2e-tests-emptydir-txpzr deletion completed in 6.521468646s

• [SLOW TEST:17.445 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
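The spec title `(root,0777,tmpfs)` encodes who writes the file, the expected permission bits, and the emptyDir medium (tmpfs corresponds to `medium: Memory`). The property the test container asserts is simply that a file created in the mount carries the requested mode; a local sketch of that check (the real test execs a container that stats the mount):

```python
import os
import stat


def create_with_mode(path: str, mode: int) -> int:
    """Create a file, force its permission bits, and return them.

    os.chmod is applied after creation to bypass the process umask,
    so the resulting bits match the requested mode exactly.
    """
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, mode)
    os.close(fd)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)
```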
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:37:05.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-plvdd
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  8 12:37:05.478: INFO: Found 0 stateful pods, waiting for 3
Feb  8 12:37:15.488: INFO: Found 2 stateful pods, waiting for 3
Feb  8 12:37:25.700: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 12:37:25.700: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 12:37:25.700: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 12:37:35.495: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 12:37:35.495: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 12:37:35.495: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  8 12:37:35.547: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  8 12:37:45.657: INFO: Updating stateful set ss2
Feb  8 12:37:45.683: INFO: Waiting for Pod e2e-tests-statefulset-plvdd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 12:37:55.709: INFO: Waiting for Pod e2e-tests-statefulset-plvdd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  8 12:38:07.324: INFO: Found 2 stateful pods, waiting for 3
Feb  8 12:38:17.373: INFO: Found 2 stateful pods, waiting for 3
Feb  8 12:38:27.336: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 12:38:27.336: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 12:38:27.336: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  8 12:38:27.385: INFO: Updating stateful set ss2
Feb  8 12:38:27.401: INFO: Waiting for Pod e2e-tests-statefulset-plvdd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 12:38:37.484: INFO: Updating stateful set ss2
Feb  8 12:38:37.545: INFO: Waiting for StatefulSet e2e-tests-statefulset-plvdd/ss2 to complete update
Feb  8 12:38:37.545: INFO: Waiting for Pod e2e-tests-statefulset-plvdd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 12:38:47.618: INFO: Waiting for StatefulSet e2e-tests-statefulset-plvdd/ss2 to complete update
Feb  8 12:38:47.618: INFO: Waiting for Pod e2e-tests-statefulset-plvdd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 12:38:57.568: INFO: Waiting for StatefulSet e2e-tests-statefulset-plvdd/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  8 12:39:07.573: INFO: Deleting all statefulset in ns e2e-tests-statefulset-plvdd
Feb  8 12:39:07.580: INFO: Scaling statefulset ss2 to 0
Feb  8 12:39:47.651: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 12:39:47.665: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:39:47.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-plvdd" for this suite.
Feb  8 12:39:55.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:39:55.969: INFO: namespace: e2e-tests-statefulset-plvdd, resource: bindings, ignored listing per whitelist
Feb  8 12:39:56.041: INFO: namespace e2e-tests-statefulset-plvdd deletion completed in 8.320769003s

• [SLOW TEST:170.837 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
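The canary and phased steps above both rely on the StatefulSet `RollingUpdate` partition: pods whose ordinal is at or above the partition get the new revision, pods below it keep the old one, and a partition greater than the replica count updates nothing (the "Not applying an update when the partition is greater than the number of replicas" step). A small sketch of that selection rule, with illustrative revision strings:

```python
def revision_for(ordinal: int, replicas: int, partition: int,
                 old: str, new: str) -> str:
    """Which controller revision a StatefulSet pod should run.

    RollingUpdate with a partition updates only pods with
    ordinal >= partition; lowering the partition step by step
    is what produces the phased rollout seen in the log.
    """
    if not 0 <= ordinal < replicas:
        raise ValueError("ordinal out of range")
    return new if ordinal >= partition else old
```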
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:39:56.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1c75a679-4a70-11ea-95d6-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-1c75a679-4a70-11ea-95d6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:41:14.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fpt55" for this suite.
Feb  8 12:41:38.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:41:38.710: INFO: namespace: e2e-tests-configmap-fpt55, resource: bindings, ignored listing per whitelist
Feb  8 12:41:38.769: INFO: namespace e2e-tests-configmap-fpt55 deletion completed in 24.290270124s

• [SLOW TEST:102.728 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
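The "waiting to observe update in volume" step works because the kubelet serves configMap volume keys through a `..data` symlink and swaps that symlink atomically when the ConfigMap changes (the AtomicWriter pattern). A simplified sketch of the swap, assuming absolute payload-directory paths; directory layout and helper name are illustrative:

```python
import os


def atomic_update(volume_dir: str, new_payload_dir: str) -> None:
    """Point volume_dir/..data at a new payload directory atomically.

    A temporary symlink is created first and then rename()d over the
    old one, so readers following key -> ..data/key never see a
    half-updated volume.
    """
    tmp = os.path.join(volume_dir, "..data_tmp")
    os.symlink(new_payload_dir, tmp)
    os.rename(tmp, os.path.join(volume_dir, "..data"))
```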
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:41:38.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb  8 12:41:38.999: INFO: Waiting up to 5m0s for pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-containers-bx6ns" to be "success or failure"
Feb  8 12:41:39.071: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.984983ms
Feb  8 12:41:41.086: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086857878s
Feb  8 12:41:43.101: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102279833s
Feb  8 12:41:45.895: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.895767742s
Feb  8 12:41:47.941: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.941945533s
Feb  8 12:41:49.958: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.958827732s
STEP: Saw pod success
Feb  8 12:41:49.958: INFO: Pod "client-containers-59b06473-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:41:49.963: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-59b06473-4a70-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 12:41:51.723: INFO: Waiting for pod client-containers-59b06473-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:41:51.748: INFO: Pod client-containers-59b06473-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:41:51.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-bx6ns" for this suite.
Feb  8 12:41:57.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:41:57.986: INFO: namespace: e2e-tests-containers-bx6ns, resource: bindings, ignored listing per whitelist
Feb  8 12:41:58.029: INFO: namespace e2e-tests-containers-bx6ns deletion completed in 6.267948942s

• [SLOW TEST:19.260 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
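The "override the image's default arguments (docker cmd)" spec above sets only the pod's `args`, leaving the image ENTRYPOINT in place. The Kubernetes rules: `command` (if set) replaces ENTRYPOINT, `args` (if set) replaces CMD, and setting `command` alone drops the image CMD entirely. A sketch of the resulting invocation; list arguments are the image defaults and `None` means "not overridden":

```python
from typing import List, Optional


def effective_invocation(image_entrypoint: Optional[List[str]],
                         image_cmd: Optional[List[str]],
                         command: Optional[List[str]] = None,
                         args: Optional[List[str]] = None) -> List[str]:
    """What the runtime executes, per the Kubernetes command/args rules."""
    if command is not None and args is None:
        return list(command)  # image CMD is ignored when command is set alone
    if command is None and args is not None:
        return list(image_entrypoint or []) + list(args)  # this spec's case
    if command is not None and args is not None:
        return list(command) + list(args)
    return list(image_entrypoint or []) + list(image_cmd or [])
```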
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:41:58.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  8 12:41:58.428: INFO: Waiting up to 5m0s for pod "pod-65448253-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-dn7rp" to be "success or failure"
Feb  8 12:41:58.452: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.343648ms
Feb  8 12:42:00.631: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203823336s
Feb  8 12:42:02.672: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244350747s
Feb  8 12:42:04.699: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270942763s
Feb  8 12:42:06.777: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349398172s
Feb  8 12:42:08.789: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.361430719s
STEP: Saw pod success
Feb  8 12:42:08.789: INFO: Pod "pod-65448253-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:42:08.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-65448253-4a70-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 12:42:08.849: INFO: Waiting for pod pod-65448253-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:42:08.868: INFO: Pod pod-65448253-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:42:08.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dn7rp" for this suite.
Feb  8 12:42:14.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:42:14.961: INFO: namespace: e2e-tests-emptydir-dn7rp, resource: bindings, ignored listing per whitelist
Feb  8 12:42:15.012: INFO: namespace e2e-tests-emptydir-dn7rp deletion completed in 6.136623491s

• [SLOW TEST:16.982 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:42:15.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-6f3b930e-4a70-11ea-95d6-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:42:27.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n6cmj" for this suite.
Feb  8 12:42:51.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:42:51.539: INFO: namespace: e2e-tests-configmap-n6cmj, resource: bindings, ignored listing per whitelist
Feb  8 12:42:51.608: INFO: namespace e2e-tests-configmap-n6cmj deletion completed in 24.226459662s

• [SLOW TEST:36.596 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:42:51.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb  8 12:42:51.915: INFO: Waiting up to 5m0s for pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-containers-drtp4" to be "success or failure"
Feb  8 12:42:51.943: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.268309ms
Feb  8 12:42:53.965: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050070875s
Feb  8 12:42:55.989: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073639609s
Feb  8 12:42:58.104: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189191019s
Feb  8 12:43:00.363: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447968512s
Feb  8 12:43:02.378: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463324317s
STEP: Saw pod success
Feb  8 12:43:02.379: INFO: Pod "client-containers-8525f051-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:43:02.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8525f051-4a70-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 12:43:02.592: INFO: Waiting for pod client-containers-8525f051-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:43:02.745: INFO: Pod client-containers-8525f051-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:43:02.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-drtp4" for this suite.
Feb  8 12:43:10.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:43:10.891: INFO: namespace: e2e-tests-containers-drtp4, resource: bindings, ignored listing per whitelist
Feb  8 12:43:11.088: INFO: namespace e2e-tests-containers-drtp4 deletion completed in 8.319029275s

• [SLOW TEST:19.480 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:43:11.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  8 12:43:11.268: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-tjnvc" to be "success or failure"
Feb  8 12:43:11.292: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.494421ms
Feb  8 12:43:13.386: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118101904s
Feb  8 12:43:15.399: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131157223s
Feb  8 12:43:18.040: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77234057s
Feb  8 12:43:20.154: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.885882083s
Feb  8 12:43:22.190: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.922063682s
Feb  8 12:43:24.266: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.998349362s
Feb  8 12:43:26.319: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.051761683s
STEP: Saw pod success
Feb  8 12:43:26.320: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  8 12:43:26.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  8 12:43:26.517: INFO: Waiting for pod pod-host-path-test to disappear
Feb  8 12:43:26.555: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:43:26.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-tjnvc" for this suite.
Feb  8 12:43:32.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:43:32.901: INFO: namespace: e2e-tests-hostpath-tjnvc, resource: bindings, ignored listing per whitelist
Feb  8 12:43:32.990: INFO: namespace e2e-tests-hostpath-tjnvc deletion completed in 6.413421371s

• [SLOW TEST:21.901 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:43:32.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-9dc3723e-4a70-11ea-95d6-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-9dc371f9-4a70-11ea-95d6-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  8 12:43:33.211: INFO: Waiting up to 5m0s for pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-9rps2" to be "success or failure"
Feb  8 12:43:33.250: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.448666ms
Feb  8 12:43:35.262: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051128253s
Feb  8 12:43:37.294: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082855802s
Feb  8 12:43:39.332: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120818399s
Feb  8 12:43:41.392: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18128899s
Feb  8 12:43:43.402: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.191708134s
STEP: Saw pod success
Feb  8 12:43:43.403: INFO: Pod "projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:43:43.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Feb  8 12:43:43.468: INFO: Waiting for pod projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:43:43.486: INFO: Pod projected-volume-9dc3714a-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:43:43.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9rps2" for this suite.
Feb  8 12:43:49.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:43:49.805: INFO: namespace: e2e-tests-projected-9rps2, resource: bindings, ignored listing per whitelist
Feb  8 12:43:49.940: INFO: namespace e2e-tests-projected-9rps2 deletion completed in 6.447975102s

• [SLOW TEST:16.950 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:43:49.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a7e0afb7-4a70-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 12:43:50.310: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-kvnjl" to be "success or failure"
Feb  8 12:43:50.324: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.545301ms
Feb  8 12:43:52.340: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030026946s
Feb  8 12:43:54.377: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067324616s
Feb  8 12:43:56.686: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375956833s
Feb  8 12:43:58.704: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39419598s
Feb  8 12:44:00.760: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.450299055s
STEP: Saw pod success
Feb  8 12:44:00.761: INFO: Pod "pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:44:00.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 12:44:01.187: INFO: Waiting for pod pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:44:01.198: INFO: Pod pod-projected-configmaps-a7e3289b-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:44:01.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kvnjl" for this suite.
Feb  8 12:44:07.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:44:07.288: INFO: namespace: e2e-tests-projected-kvnjl, resource: bindings, ignored listing per whitelist
Feb  8 12:44:07.411: INFO: namespace e2e-tests-projected-kvnjl deletion completed in 6.196031018s

• [SLOW TEST:17.469 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:44:07.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b251b1dc-4a70-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:44:07.700: INFO: Waiting up to 5m0s for pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-fzl7k" to be "success or failure"
Feb  8 12:44:07.714: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.173515ms
Feb  8 12:44:09.826: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126329874s
Feb  8 12:44:11.856: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156053492s
Feb  8 12:44:13.871: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171603739s
Feb  8 12:44:15.931: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231187793s
Feb  8 12:44:18.086: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.386421837s
STEP: Saw pod success
Feb  8 12:44:18.086: INFO: Pod "pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:44:18.094: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  8 12:44:18.160: INFO: Waiting for pod pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:44:18.213: INFO: Pod pod-secrets-b252f94a-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:44:18.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fzl7k" for this suite.
Feb  8 12:44:24.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:44:24.428: INFO: namespace: e2e-tests-secrets-fzl7k, resource: bindings, ignored listing per whitelist
Feb  8 12:44:24.441: INFO: namespace e2e-tests-secrets-fzl7k deletion completed in 6.218464633s

• [SLOW TEST:17.030 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:44:24.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-bc7425eb-4a70-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:44:24.836: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-k5cdt" to be "success or failure"
Feb  8 12:44:24.858: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.004821ms
Feb  8 12:44:26.889: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052305861s
Feb  8 12:44:28.909: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072249616s
Feb  8 12:44:31.039: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202161003s
Feb  8 12:44:33.050: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214131252s
Feb  8 12:44:35.116: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.279430186s
STEP: Saw pod success
Feb  8 12:44:35.116: INFO: Pod "pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:44:35.160: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 12:44:35.385: INFO: Waiting for pod pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:44:35.457: INFO: Pod pod-projected-secrets-bc8831d7-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:44:35.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k5cdt" for this suite.
Feb  8 12:44:41.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:44:41.710: INFO: namespace: e2e-tests-projected-k5cdt, resource: bindings, ignored listing per whitelist
Feb  8 12:44:41.743: INFO: namespace e2e-tests-projected-k5cdt deletion completed in 6.255884598s

• [SLOW TEST:17.301 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:44:41.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb  8 12:44:41.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:44.663: INFO: stderr: ""
Feb  8 12:44:44.663: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 12:44:44.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:44.876: INFO: stderr: ""
Feb  8 12:44:44.876: INFO: stdout: "update-demo-nautilus-8pvkw update-demo-nautilus-gvw6j "
Feb  8 12:44:44.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pvkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:45.189: INFO: stderr: ""
Feb  8 12:44:45.189: INFO: stdout: ""
Feb  8 12:44:45.190: INFO: update-demo-nautilus-8pvkw is created but not running
Feb  8 12:44:50.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:50.385: INFO: stderr: ""
Feb  8 12:44:50.385: INFO: stdout: "update-demo-nautilus-8pvkw update-demo-nautilus-gvw6j "
Feb  8 12:44:50.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pvkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:50.545: INFO: stderr: ""
Feb  8 12:44:50.545: INFO: stdout: ""
Feb  8 12:44:50.545: INFO: update-demo-nautilus-8pvkw is created but not running
Feb  8 12:44:55.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:55.724: INFO: stderr: ""
Feb  8 12:44:55.724: INFO: stdout: "update-demo-nautilus-8pvkw update-demo-nautilus-gvw6j "
Feb  8 12:44:55.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pvkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:55.894: INFO: stderr: ""
Feb  8 12:44:55.894: INFO: stdout: "true"
Feb  8 12:44:55.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pvkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:56.022: INFO: stderr: ""
Feb  8 12:44:56.023: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 12:44:56.023: INFO: validating pod update-demo-nautilus-8pvkw
Feb  8 12:44:56.031: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 12:44:56.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 12:44:56.031: INFO: update-demo-nautilus-8pvkw is verified up and running
Feb  8 12:44:56.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gvw6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:44:56.137: INFO: stderr: ""
Feb  8 12:44:56.137: INFO: stdout: ""
Feb  8 12:44:56.137: INFO: update-demo-nautilus-gvw6j is created but not running
Feb  8 12:45:01.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:01.352: INFO: stderr: ""
Feb  8 12:45:01.353: INFO: stdout: "update-demo-nautilus-8pvkw update-demo-nautilus-gvw6j "
Feb  8 12:45:01.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pvkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:01.542: INFO: stderr: ""
Feb  8 12:45:01.542: INFO: stdout: "true"
Feb  8 12:45:01.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pvkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:01.664: INFO: stderr: ""
Feb  8 12:45:01.664: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 12:45:01.664: INFO: validating pod update-demo-nautilus-8pvkw
Feb  8 12:45:01.681: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 12:45:01.681: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 12:45:01.681: INFO: update-demo-nautilus-8pvkw is verified up and running
Feb  8 12:45:01.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gvw6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:01.785: INFO: stderr: ""
Feb  8 12:45:01.785: INFO: stdout: "true"
Feb  8 12:45:01.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gvw6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:01.947: INFO: stderr: ""
Feb  8 12:45:01.947: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 12:45:01.947: INFO: validating pod update-demo-nautilus-gvw6j
Feb  8 12:45:01.968: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 12:45:01.968: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 12:45:01.968: INFO: update-demo-nautilus-gvw6j is verified up and running
STEP: rolling-update to new replication controller
Feb  8 12:45:01.971: INFO: scanned /root for discovery docs: 
Feb  8 12:45:01.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:34.396: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  8 12:45:34.396: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 12:45:34.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:34.608: INFO: stderr: ""
Feb  8 12:45:34.608: INFO: stdout: "update-demo-kitten-lqbcs update-demo-kitten-spdq4 "
Feb  8 12:45:34.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lqbcs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:34.796: INFO: stderr: ""
Feb  8 12:45:34.796: INFO: stdout: "true"
Feb  8 12:45:34.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lqbcs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:35.003: INFO: stderr: ""
Feb  8 12:45:35.003: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  8 12:45:35.003: INFO: validating pod update-demo-kitten-lqbcs
Feb  8 12:45:35.062: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  8 12:45:35.063: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  8 12:45:35.063: INFO: update-demo-kitten-lqbcs is verified up and running
Feb  8 12:45:35.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-spdq4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:35.218: INFO: stderr: ""
Feb  8 12:45:35.218: INFO: stdout: "true"
Feb  8 12:45:35.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-spdq4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gnfjr'
Feb  8 12:45:35.318: INFO: stderr: ""
Feb  8 12:45:35.318: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  8 12:45:35.318: INFO: validating pod update-demo-kitten-spdq4
Feb  8 12:45:35.332: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  8 12:45:35.332: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  8 12:45:35.332: INFO: update-demo-kitten-spdq4 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:45:35.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gnfjr" for this suite.
Feb  8 12:46:01.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:46:01.542: INFO: namespace: e2e-tests-kubectl-gnfjr, resource: bindings, ignored listing per whitelist
Feb  8 12:46:01.558: INFO: namespace e2e-tests-kubectl-gnfjr deletion completed in 26.220643552s

• [SLOW TEST:79.815 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
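The "got data" / "Unmarshalled json" lines above record the framework fetching a small JSON document from each update-demo pod and comparing its `image` field against the expected value. A minimal Python sketch of that comparison (a hypothetical helper, not the e2e framework's actual code):

```python
import json

def validate_pod_data(raw: str, expected_image: str) -> bool:
    """Parse the JSON served by an update-demo pod and compare its image field.

    Hypothetical helper mirroring the log's validation step.
    """
    data = json.loads(raw)
    return data.get("image") == expected_image

# Mirrors the log: got data {"image": "nautilus.jpg"}, expecting nautilus.jpg
print(validate_pod_data('{"image": "nautilus.jpg"}', "nautilus.jpg"))  # True
```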
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:46:01.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:46:02.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-cvpwb" to be "success or failure"
Feb  8 12:46:02.098: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.413415ms
Feb  8 12:46:04.150: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061933535s
Feb  8 12:46:06.168: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080341177s
Feb  8 12:46:08.674: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586129167s
Feb  8 12:46:10.708: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61951785s
Feb  8 12:46:12.722: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633480232s
STEP: Saw pod success
Feb  8 12:46:12.722: INFO: Pod "downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:46:12.729: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:46:12.863: INFO: Waiting for pod downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005 to disappear
Feb  8 12:46:12.877: INFO: Pod downwardapi-volume-f682190f-4a70-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:46:12.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cvpwb" for this suite.
Feb  8 12:46:18.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:46:19.124: INFO: namespace: e2e-tests-downward-api-cvpwb, resource: bindings, ignored listing per whitelist
Feb  8 12:46:19.158: INFO: namespace e2e-tests-downward-api-cvpwb deletion completed in 6.262714053s

• [SLOW TEST:17.600 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
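The test above asserts that when a container declares no memory limit, the downward API reports the node's allocatable memory instead. The fallback rule can be sketched as follows (illustrative only; the function name and units are assumptions):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward-API fallback: an unset container limit defaults to node allocatable.

    Illustrative sketch of the rule the test exercises, not kubelet code.
    """
    return container_limit if container_limit is not None else node_allocatable

print(effective_memory_limit(None, 8 * 1024**3))           # falls back to allocatable
print(effective_memory_limit(512 * 1024**2, 8 * 1024**3))  # explicit limit wins
```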
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:46:19.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb  8 12:46:19.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  8 12:46:19.435: INFO: stderr: ""
Feb  8 12:46:19.435: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:46:19.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nwk6x" for this suite.
Feb  8 12:46:25.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:46:25.636: INFO: namespace: e2e-tests-kubectl-nwk6x, resource: bindings, ignored listing per whitelist
Feb  8 12:46:25.662: INFO: namespace e2e-tests-kubectl-nwk6x deletion completed in 6.215505679s

• [SLOW TEST:6.503 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
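The cluster-info stdout above is wrapped in ANSI color escapes (`\x1b[0;32m`, `\x1b[0;33m`). A validation like this test's typically strips the escapes and then looks for the master entry; a hedged sketch using the exact output recorded in the log:

```python
import re

ANSI = re.compile(r"\x1b\[[0-9;]*m")  # matches color escape sequences

def master_url(cluster_info: str):
    """Strip ANSI color codes and extract the Kubernetes master URL, if present.

    Hypothetical helper; the real test only greps for the master entry.
    """
    plain = ANSI.sub("", cluster_info)
    m = re.search(r"Kubernetes master is running at (\S+)", plain)
    return m.group(1) if m else None

out = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n")
print(master_url(out))  # https://172.24.4.212:6443
```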
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:46:25.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:46:26.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bql2f" for this suite.
Feb  8 12:46:50.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:46:50.274: INFO: namespace: e2e-tests-pods-bql2f, resource: bindings, ignored listing per whitelist
Feb  8 12:46:50.348: INFO: namespace e2e-tests-pods-bql2f deletion completed in 24.306023703s

• [SLOW TEST:24.686 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
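The "Pods Set QOS Class" test verifies that the API server assigns a QoS class derived from the pod's resource spec. The classification can be sketched as follows (simplified to a single container; the real kubelet logic also checks that every resource has matching requests and limits):

```python
def qos_class(requests: dict, limits: dict) -> str:
    """Simplified Kubernetes QoS classification for a single-container pod."""
    if not requests and not limits:
        return "BestEffort"   # no resources declared at all
    if limits and requests == limits:
        return "Guaranteed"   # requests equal limits for all resources
    return "Burstable"        # anything in between

print(qos_class({}, {}))                          # BestEffort
print(qos_class({"cpu": "100m"}, {}))             # Burstable
print(qos_class({"cpu": "1", "memory": "1Gi"},
                {"cpu": "1", "memory": "1Gi"}))   # Guaranteed
```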
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:46:50.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cc5kb
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 12:46:50.553: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 12:47:26.741: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-cc5kb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 12:47:26.741: INFO: >>> kubeConfig: /root/.kube/config
I0208 12:47:26.825296       8 log.go:172] (0xc00085c630) (0xc000e23360) Create stream
I0208 12:47:26.825359       8 log.go:172] (0xc00085c630) (0xc000e23360) Stream added, broadcasting: 1
I0208 12:47:26.830724       8 log.go:172] (0xc00085c630) Reply frame received for 1
I0208 12:47:26.830752       8 log.go:172] (0xc00085c630) (0xc0012b7e00) Create stream
I0208 12:47:26.830763       8 log.go:172] (0xc00085c630) (0xc0012b7e00) Stream added, broadcasting: 3
I0208 12:47:26.831515       8 log.go:172] (0xc00085c630) Reply frame received for 3
I0208 12:47:26.831536       8 log.go:172] (0xc00085c630) (0xc002578a00) Create stream
I0208 12:47:26.831541       8 log.go:172] (0xc00085c630) (0xc002578a00) Stream added, broadcasting: 5
I0208 12:47:26.832361       8 log.go:172] (0xc00085c630) Reply frame received for 5
I0208 12:47:27.023431       8 log.go:172] (0xc00085c630) Data frame received for 3
I0208 12:47:27.023516       8 log.go:172] (0xc0012b7e00) (3) Data frame handling
I0208 12:47:27.023551       8 log.go:172] (0xc0012b7e00) (3) Data frame sent
I0208 12:47:27.198155       8 log.go:172] (0xc00085c630) Data frame received for 1
I0208 12:47:27.198215       8 log.go:172] (0xc000e23360) (1) Data frame handling
I0208 12:47:27.198241       8 log.go:172] (0xc000e23360) (1) Data frame sent
I0208 12:47:27.198254       8 log.go:172] (0xc00085c630) (0xc000e23360) Stream removed, broadcasting: 1
I0208 12:47:27.198604       8 log.go:172] (0xc00085c630) (0xc0012b7e00) Stream removed, broadcasting: 3
I0208 12:47:27.198937       8 log.go:172] (0xc00085c630) (0xc002578a00) Stream removed, broadcasting: 5
I0208 12:47:27.198984       8 log.go:172] (0xc00085c630) (0xc000e23360) Stream removed, broadcasting: 1
I0208 12:47:27.198991       8 log.go:172] (0xc00085c630) (0xc0012b7e00) Stream removed, broadcasting: 3
I0208 12:47:27.198995       8 log.go:172] (0xc00085c630) (0xc002578a00) Stream removed, broadcasting: 5
I0208 12:47:27.199213       8 log.go:172] (0xc00085c630) Go away received
Feb  8 12:47:27.199: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:47:27.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cc5kb" for this suite.
Feb  8 12:47:51.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:47:51.483: INFO: namespace: e2e-tests-pod-network-test-cc5kb, resource: bindings, ignored listing per whitelist
Feb  8 12:47:51.487: INFO: namespace e2e-tests-pod-network-test-cc5kb deletion completed in 24.270276619s

• [SLOW TEST:61.139 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
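The `ExecWithOptions` line above shows the shape of the connectivity probe: a host test container curls a `/dial` endpoint on one pod, which in turn connects to the target pod. Building that probe URL can be sketched as (hypothetical helper; the parameter names come directly from the logged URL):

```python
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, port=8080, protocol="http", tries=1):
    """Build the /dial probe URL used in the intra-pod connectivity check."""
    qs = urlencode({"request": "hostName", "protocol": protocol,
                    "host": target_ip, "port": port, "tries": tries})
    return f"http://{probe_ip}:8080/dial?{qs}"

print(dial_url("10.32.0.5", "10.32.0.4"))
# http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1
```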
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:47:51.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  8 12:47:51.737: INFO: Waiting up to 5m0s for pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005" in namespace "e2e-tests-var-expansion-fv47c" to be "success or failure"
Feb  8 12:47:51.783: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.7707ms
Feb  8 12:47:53.803: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066026075s
Feb  8 12:47:55.843: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106094049s
Feb  8 12:47:59.758: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020934244s
Feb  8 12:48:01.819: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082135307s
Feb  8 12:48:03.855: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117613103s
Feb  8 12:48:05.901: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.164062342s
STEP: Saw pod success
Feb  8 12:48:05.901: INFO: Pod "var-expansion-37db745e-4a71-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:48:05.937: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-37db745e-4a71-11ea-95d6-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  8 12:48:06.100: INFO: Waiting for pod var-expansion-37db745e-4a71-11ea-95d6-0242ac110005 to disappear
Feb  8 12:48:06.123: INFO: Pod var-expansion-37db745e-4a71-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:48:06.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-fv47c" for this suite.
Feb  8 12:48:12.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:48:12.311: INFO: namespace: e2e-tests-var-expansion-fv47c, resource: bindings, ignored listing per whitelist
Feb  8 12:48:12.370: INFO: namespace e2e-tests-var-expansion-fv47c deletion completed in 6.203609395s

• [SLOW TEST:20.882 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
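The var-expansion test runs a pod whose command contains `$(VAR)` references that the kubelet substitutes from the container's environment. The substitution rule, where unresolved references are left literal, can be sketched as (simplified; the real expansion also handles `$$` escaping):

```python
import re

def expand_command(arg: str, env: dict) -> str:
    """Kubernetes-style $(VAR) expansion: known vars substituted, unknown left as-is."""
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)), arg)

print(expand_command("echo $(MESSAGE)", {"MESSAGE": "hello"}))  # echo hello
print(expand_command("echo $(MISSING)", {}))                    # echo $(MISSING)
```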
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:48:12.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  8 12:48:12.600: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  8 12:48:17.617: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:48:19.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-f4vgp" for this suite.
Feb  8 12:48:32.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:48:32.703: INFO: namespace: e2e-tests-replication-controller-f4vgp, resource: bindings, ignored listing per whitelist
Feb  8 12:48:32.827: INFO: namespace e2e-tests-replication-controller-f4vgp deletion completed in 13.149803146s

• [SLOW TEST:20.457 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
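The ReplicationController "release" behavior above hinges on label-selector matching: once a pod's labels stop matching the RC's selector, the controller orphans it (and creates a replacement, which is why the test waits for the pod count again). Equality-based selector matching is just a subset check; a minimal sketch:

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """Equality-based selector: every selector key/value must appear on the pod."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

rc_selector = {"name": "pod-release"}
print(selector_matches(rc_selector, {"name": "pod-release"}))  # True: still owned
print(selector_matches(rc_selector, {"name": "released"}))     # False: pod released
```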
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:48:32.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:48:33.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-jbkkd" to be "success or failure"
Feb  8 12:48:33.466: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.916396ms
Feb  8 12:48:35.477: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038531071s
Feb  8 12:48:37.493: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055076108s
Feb  8 12:48:39.509: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070587162s
Feb  8 12:48:41.527: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089286643s
Feb  8 12:48:43.564: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125345744s
Feb  8 12:48:45.774: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.335666494s
STEP: Saw pod success
Feb  8 12:48:45.774: INFO: Pod "downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:48:45.780: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:48:46.178: INFO: Waiting for pod downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005 to disappear
Feb  8 12:48:46.288: INFO: Pod downwardapi-volume-50b3e0ba-4a71-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:48:46.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jbkkd" for this suite.
Feb  8 12:48:52.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:48:52.732: INFO: namespace: e2e-tests-downward-api-jbkkd, resource: bindings, ignored listing per whitelist
Feb  8 12:48:52.758: INFO: namespace e2e-tests-downward-api-jbkkd deletion completed in 6.450478568s

• [SLOW TEST:19.931 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:48:52.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:49:53.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-jx8lf" for this suite.
Feb  8 12:50:01.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:50:01.530: INFO: namespace: e2e-tests-container-runtime-jx8lf, resource: bindings, ignored listing per whitelist
Feb  8 12:50:01.713: INFO: namespace e2e-tests-container-runtime-jx8lf deletion completed in 8.322885283s

• [SLOW TEST:68.954 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
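The `terminate-cmd-*` containers above exit under the restart policies Always (rpa), OnFailure (rpof), and Never (rpn), and the test checks the resulting pod phase, restart count, and state. The phase rule for a single terminated container can be sketched as (simplified; an illustrative assumption, not the kubelet's actual status logic):

```python
def expected_phase(exit_code: int, restart_policy: str) -> str:
    """Simplified pod phase for a pod with one terminated container."""
    if restart_policy == "Always":
        return "Running"    # the container is always restarted
    if exit_code == 0:
        return "Succeeded"  # clean exit, no restart required
    if restart_policy == "OnFailure":
        return "Running"    # non-zero exit keeps it restarting
    return "Failed"         # Never + non-zero exit

print(expected_phase(1, "Always"))   # Running
print(expected_phase(0, "Never"))    # Succeeded
print(expected_phase(1, "Never"))    # Failed
```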
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:50:01.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  8 12:50:01.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r5l4d'
Feb  8 12:50:02.438: INFO: stderr: ""
Feb  8 12:50:02.439: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  8 12:50:03.453: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:03.453: INFO: Found 0 / 1
Feb  8 12:50:04.697: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:04.697: INFO: Found 0 / 1
Feb  8 12:50:05.451: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:05.451: INFO: Found 0 / 1
Feb  8 12:50:06.455: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:06.455: INFO: Found 0 / 1
Feb  8 12:50:07.654: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:07.654: INFO: Found 0 / 1
Feb  8 12:50:08.476: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:08.476: INFO: Found 0 / 1
Feb  8 12:50:09.457: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:09.457: INFO: Found 0 / 1
Feb  8 12:50:10.462: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:10.463: INFO: Found 0 / 1
Feb  8 12:50:11.457: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:11.457: INFO: Found 1 / 1
Feb  8 12:50:11.457: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  8 12:50:11.469: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:11.469: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  8 12:50:11.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kqbh6 --namespace=e2e-tests-kubectl-r5l4d -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  8 12:50:11.708: INFO: stderr: ""
Feb  8 12:50:11.708: INFO: stdout: "pod/redis-master-kqbh6 patched\n"
STEP: checking annotations
Feb  8 12:50:11.716: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:50:11.716: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:50:11.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r5l4d" for this suite.
Feb  8 12:50:35.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:50:35.933: INFO: namespace: e2e-tests-kubectl-r5l4d, resource: bindings, ignored listing per whitelist
Feb  8 12:50:35.949: INFO: namespace e2e-tests-kubectl-r5l4d deletion completed in 24.227221363s

• [SLOW TEST:34.235 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
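For reference, the `kubectl patch pod … -p {"metadata":{"annotations":{"x":"y"}}}` call logged in the test above sends a strategic-merge patch body. A minimal stdlib-only Python sketch of that payload (the pod name and namespace are from this particular run; this is illustrative, not code from the suite):

```python
import json

# The annotation patch the test applies at 12:50:11 above:
# it merges {"x": "y"} into the pod's metadata.annotations.
patch = {"metadata": {"annotations": {"x": "y"}}}

# Serialized form, as passed on the kubectl command line via -p.
body = json.dumps(patch)
print(body)
```

The test then re-lists pods matching `app:redis` and verifies the annotation landed, which is what the "checking annotations" STEP above corresponds to.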
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:50:35.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-2lwrd/secret-test-99f6fdc1-4a71-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:50:36.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-2lwrd" to be "success or failure"
Feb  8 12:50:36.368: INFO: Pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.26833ms
Feb  8 12:50:38.619: INFO: Pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276410223s
Feb  8 12:50:41.227: INFO: Pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.8844606s
Feb  8 12:50:43.253: INFO: Pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.910603223s
Feb  8 12:50:45.273: INFO: Pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.930315792s
STEP: Saw pod success
Feb  8 12:50:45.273: INFO: Pod "pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:50:45.282: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005 container env-test: 
STEP: delete the pod
Feb  8 12:50:46.792: INFO: Waiting for pod pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005 to disappear
Feb  8 12:50:47.098: INFO: Pod pod-configmaps-99f9bb8d-4a71-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:50:47.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2lwrd" for this suite.
Feb  8 12:50:53.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:50:53.301: INFO: namespace: e2e-tests-secrets-2lwrd, resource: bindings, ignored listing per whitelist
Feb  8 12:50:53.437: INFO: namespace e2e-tests-secrets-2lwrd deletion completed in 6.325536027s

• [SLOW TEST:17.488 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:50:53.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:50:53.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-sn6nw" to be "success or failure"
Feb  8 12:50:53.884: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.17319ms
Feb  8 12:50:55.911: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126573567s
Feb  8 12:50:57.923: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1383342s
Feb  8 12:51:02.204: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418729066s
Feb  8 12:51:04.225: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.440187282s
Feb  8 12:51:06.373: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.588411061s
STEP: Saw pod success
Feb  8 12:51:06.373: INFO: Pod "downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:51:06.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 12:51:06.590: INFO: Waiting for pod downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005 to disappear
Feb  8 12:51:06.601: INFO: Pod downwardapi-volume-a45c7c67-4a71-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:51:06.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sn6nw" for this suite.
Feb  8 12:51:12.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:51:12.941: INFO: namespace: e2e-tests-downward-api-sn6nw, resource: bindings, ignored listing per whitelist
Feb  8 12:51:12.964: INFO: namespace e2e-tests-downward-api-sn6nw deletion completed in 6.351542007s

• [SLOW TEST:19.527 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:51:12.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-aff7362e-4a71-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 12:51:13.307: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-trb8g" to be "success or failure"
Feb  8 12:51:13.359: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.193331ms
Feb  8 12:51:15.733: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42485642s
Feb  8 12:51:17.752: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443690825s
Feb  8 12:51:19.966: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658384874s
Feb  8 12:51:22.388: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.079975349s
Feb  8 12:51:24.596: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.287937356s
STEP: Saw pod success
Feb  8 12:51:24.596: INFO: Pod "pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:51:24.610: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 12:51:24.756: INFO: Waiting for pod pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005 to disappear
Feb  8 12:51:24.765: INFO: Pod pod-projected-configmaps-aff88b02-4a71-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:51:24.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-trb8g" for this suite.
Feb  8 12:51:30.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:51:31.041: INFO: namespace: e2e-tests-projected-trb8g, resource: bindings, ignored listing per whitelist
Feb  8 12:51:31.276: INFO: namespace e2e-tests-projected-trb8g deletion completed in 6.391808583s

• [SLOW TEST:18.312 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:51:31.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-4jr25
Feb  8 12:51:41.577: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-4jr25
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 12:51:41.581: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:55:43.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4jr25" for this suite.
Feb  8 12:55:51.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:55:51.862: INFO: namespace: e2e-tests-container-probe-4jr25, resource: bindings, ignored listing per whitelist
Feb  8 12:55:51.867: INFO: namespace e2e-tests-container-probe-4jr25 deletion completed in 8.285816506s

• [SLOW TEST:260.590 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
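The probe test above records the pod's initial `restartCount` (0) and then simply watches for roughly four minutes (12:51:41 to 12:55:43) to confirm it never rises, i.e. the `/healthz` liveness probe keeps passing. A stand-alone Python sketch of that style of check (hypothetical helper, not code from the suite; `get_restart_count` stands in for a pod-status lookup against the API server):

```python
import time

def assert_no_restarts(get_restart_count, duration_s=240, interval_s=10,
                       sleep=time.sleep):
    # Record the baseline restart count, then poll for duration_s seconds;
    # fail as soon as the observed count exceeds the baseline.
    baseline = get_restart_count()
    elapsed = 0
    while elapsed < duration_s:
        sleep(interval_s)
        elapsed += interval_s
        current = get_restart_count()
        if current > baseline:
            raise AssertionError(
                f"container restarted: {baseline} -> {current}")
    return baseline
```

With a probe that keeps failing, the kubelet restarts the container and the count rises, so the same helper (with an injected fake clock) also models the inverse "should be restarted" variants of this test.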
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:55:51.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-56375a37-4a72-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:55:52.163: INFO: Waiting up to 5m0s for pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-vgrfk" to be "success or failure"
Feb  8 12:55:52.183: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.061583ms
Feb  8 12:55:54.200: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036896698s
Feb  8 12:55:56.221: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058005278s
Feb  8 12:55:58.517: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353947732s
Feb  8 12:56:00.545: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38188886s
Feb  8 12:56:02.578: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.415001305s
STEP: Saw pod success
Feb  8 12:56:02.578: INFO: Pod "pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:56:02.591: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005 container secret-env-test: 
STEP: delete the pod
Feb  8 12:56:03.505: INFO: Waiting for pod pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005 to disappear
Feb  8 12:56:03.533: INFO: Pod pod-secrets-56380d4b-4a72-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:56:03.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vgrfk" for this suite.
Feb  8 12:56:09.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:56:09.884: INFO: namespace: e2e-tests-secrets-vgrfk, resource: bindings, ignored listing per whitelist
Feb  8 12:56:09.924: INFO: namespace e2e-tests-secrets-vgrfk deletion completed in 6.383086142s

• [SLOW TEST:18.056 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:56:09.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 12:56:10.130: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  8 12:56:15.150: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  8 12:56:23.165: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  8 12:56:25.178: INFO: Creating deployment "test-rollover-deployment"
Feb  8 12:56:25.209: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  8 12:56:27.595: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  8 12:56:27.613: INFO: Ensure that both replica sets have 1 created replica
Feb  8 12:56:27.923: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  8 12:56:27.937: INFO: Updating deployment test-rollover-deployment
Feb  8 12:56:27.937: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  8 12:56:30.249: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  8 12:56:30.260: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  8 12:56:30.269: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:30.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763388, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:32.619: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:32.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763388, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:34.284: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:34.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763388, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:36.286: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:36.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763388, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:38.282: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:38.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763388, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:40.289: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:40.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763388, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:42.300: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:42.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763400, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:45.282: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:45.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763400, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:46.293: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:46.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763400, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:48.292: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:48.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763400, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:50.292: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 12:56:50.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763400, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763385, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 12:56:52.637: INFO: 
Feb  8 12:56:52.637: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  8 12:56:52.672: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-k8wnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8wnd/deployments/test-rollover-deployment,UID:69e8bb69-4a72-11ea-a994-fa163e34d433,ResourceVersion:20980196,Generation:2,CreationTimestamp:2020-02-08 12:56:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-08 12:56:25 +0000 UTC 2020-02-08 12:56:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-08 12:56:51 +0000 UTC 2020-02-08 12:56:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  8 12:56:52.679: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-k8wnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8wnd/replicasets/test-rollover-deployment-5b8479fdb6,UID:6b8e76d2-4a72-11ea-a994-fa163e34d433,ResourceVersion:20980187,Generation:2,CreationTimestamp:2020-02-08 12:56:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 69e8bb69-4a72-11ea-a994-fa163e34d433 0xc000910bf7 0xc000910bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  8 12:56:52.679: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  8 12:56:52.679: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-k8wnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8wnd/replicasets/test-rollover-controller,UID:60e32a5b-4a72-11ea-a994-fa163e34d433,ResourceVersion:20980195,Generation:2,CreationTimestamp:2020-02-08 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 69e8bb69-4a72-11ea-a994-fa163e34d433 0xc00091061f 0xc000910630}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 12:56:52.680: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-k8wnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8wnd/replicasets/test-rollover-deployment-58494b7559,UID:69f7c8c6-4a72-11ea-a994-fa163e34d433,ResourceVersion:20980148,Generation:2,CreationTimestamp:2020-02-08 12:56:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 69e8bb69-4a72-11ea-a994-fa163e34d433 0xc000910a87 0xc000910a88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 12:56:52.691: INFO: Pod "test-rollover-deployment-5b8479fdb6-htsh6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-htsh6,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-k8wnd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k8wnd/pods/test-rollover-deployment-5b8479fdb6-htsh6,UID:6bb6ff26-4a72-11ea-a994-fa163e34d433,ResourceVersion:20980172,Generation:0,CreationTimestamp:2020-02-08 12:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 6b8e76d2-4a72-11ea-a994-fa163e34d433 0xc001d76a17 0xc001d76a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qzf99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzf99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qzf99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d76b60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d76b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:56:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:56:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:56:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 12:56:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-08 12:56:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-08 12:56:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0a4b36b969afcc6f8bad32d7208be3a1232d882b243ea8b40324f1dd7551e12d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:56:52.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-k8wnd" for this suite.
Feb  8 12:57:05.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:57:05.661: INFO: namespace: e2e-tests-deployment-k8wnd, resource: bindings, ignored listing per whitelist
Feb  8 12:57:05.836: INFO: namespace e2e-tests-deployment-k8wnd deletion completed in 13.140272355s

• [SLOW TEST:55.912 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
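The deployment spec this rollover test exercised can be reconstructed from the dump above as roughly the following manifest. Field values (replicas, strategy, minReadySeconds, container image) are taken from the logged `DeploymentSpec`; this is a sketch for readability, not the test's actual fixture code:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # logged MinReadySeconds:10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # logged MaxUnavailable:0 — old pods stay up
      maxSurge: 1              # logged MaxSurge:1 — one extra pod during rollover
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `maxUnavailable: 0` and `minReadySeconds: 10`, the controller only scales down the old ReplicaSets after the new pod has been ready for 10 seconds, which is the behavior the repeated "all replica sets need to contain the pod-template-hash label" polls above are waiting on.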
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:57:05.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  8 12:57:06.087: INFO: Waiting up to 5m0s for pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-lhv2f" to be "success or failure"
Feb  8 12:57:06.093: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.790186ms
Feb  8 12:57:08.108: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020960362s
Feb  8 12:57:10.126: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038884629s
Feb  8 12:57:12.200: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113231275s
Feb  8 12:57:14.223: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136195006s
Feb  8 12:57:16.489: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.402351738s
Feb  8 12:57:19.442: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.354981648s
STEP: Saw pod success
Feb  8 12:57:19.442: INFO: Pod "downward-api-8248a14f-4a72-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:57:19.481: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8248a14f-4a72-11ea-95d6-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  8 12:57:19.947: INFO: Waiting for pod downward-api-8248a14f-4a72-11ea-95d6-0242ac110005 to disappear
Feb  8 12:57:19.957: INFO: Pod downward-api-8248a14f-4a72-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:57:19.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lhv2f" for this suite.
Feb  8 12:57:25.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:57:26.031: INFO: namespace: e2e-tests-downward-api-lhv2f, resource: bindings, ignored listing per whitelist
Feb  8 12:57:26.154: INFO: namespace e2e-tests-downward-api-lhv2f deletion completed in 6.188751881s

• [SLOW TEST:20.318 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
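The Downward API mechanism this test verifies — exposing the pod's own UID as an environment variable — can be sketched as the manifest below. The container name `dapi-container` comes from the log above; the image, command, and the `POD_UID` variable name are assumptions, since the test's pod spec is not printed in this excerpt:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod       # the test generates a UID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name taken from the log above
    image: busybox             # assumption: image is not shown in this excerpt
    command: ["sh", "-c", "env"]
    env:
    - name: POD_UID            # hypothetical variable name
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # kubelet injects the pod's UID at start
```

The test then greps the container's output for the expected UID, which is why success is signaled by the pod reaching `Succeeded` and its logs being fetched.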
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:57:26.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8e6ffa5d-4a72-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:57:26.884: INFO: Waiting up to 5m0s for pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-tgdkc" to be "success or failure"
Feb  8 12:57:26.899: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.279866ms
Feb  8 12:57:29.489: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604895007s
Feb  8 12:57:31.564: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67995704s
Feb  8 12:57:33.583: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69845693s
Feb  8 12:57:36.244: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.359796615s
Feb  8 12:57:38.262: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.377863479s
Feb  8 12:57:40.287: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.402766245s
STEP: Saw pod success
Feb  8 12:57:40.287: INFO: Pod "pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:57:40.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  8 12:57:40.834: INFO: Waiting for pod pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005 to disappear
Feb  8 12:57:40.845: INFO: Pod pod-secrets-8e904e21-4a72-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:57:40.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tgdkc" for this suite.
Feb  8 12:57:47.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:57:47.128: INFO: namespace: e2e-tests-secrets-tgdkc, resource: bindings, ignored listing per whitelist
Feb  8 12:57:47.138: INFO: namespace e2e-tests-secrets-tgdkc deletion completed in 6.266391449s
STEP: Destroying namespace "e2e-tests-secret-namespace-hnpbs" for this suite.
Feb  8 12:57:53.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:57:53.332: INFO: namespace: e2e-tests-secret-namespace-hnpbs, resource: bindings, ignored listing per whitelist
Feb  8 12:57:53.384: INFO: namespace e2e-tests-secret-namespace-hnpbs deletion completed in 6.245679488s

• [SLOW TEST:27.229 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
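This test creates secrets with the same name in two namespaces (note the two namespace teardowns above, `e2e-tests-secrets-tgdkc` and `e2e-tests-secret-namespace-hnpbs`) and confirms the pod mounts only the secret from its own namespace. A minimal sketch of the consuming pod, with the container name taken from the log and the image and mount path assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets            # the test uses a generated, UID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test   # container name taken from the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test  # a secret with this name also exists in a second
                               # namespace; only the pod's own namespace is consulted
```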
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:57:53.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  8 12:58:06.874: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:58:07.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-2smtn" for this suite.
Feb  8 12:58:38.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:58:38.495: INFO: namespace: e2e-tests-replicaset-2smtn, resource: bindings, ignored listing per whitelist
Feb  8 12:58:38.638: INFO: namespace e2e-tests-replicaset-2smtn deletion completed in 30.689528181s

• [SLOW TEST:45.254 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
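The adoption/release flow above works through owner references: a ReplicaSet whose selector matches an existing orphan pod adds itself as the pod's owner; editing the pod's label so it no longer matches causes the controller to remove that owner reference and spin up a replacement. A sketch of the two objects involved (names from the log's `pod-adoption-release` step; the image is an assumption):

```yaml
# Orphan pod created first, carrying only a 'name' label:
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pod-adoption-release
    image: docker.io/library/nginx:1.14-alpine   # assumption: not shown in this excerpt
---
# ReplicaSet whose selector matches the pod's label, so it adopts the orphan:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine
```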
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:58:38.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b9b52878-4a72-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 12:58:39.100: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-4wc5t" to be "success or failure"
Feb  8 12:58:39.313: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 212.554ms
Feb  8 12:58:41.910: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809332751s
Feb  8 12:58:43.926: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825877655s
Feb  8 12:58:47.366: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265328635s
Feb  8 12:58:49.384: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.28338766s
Feb  8 12:58:51.431: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.330330779s
Feb  8 12:58:53.573: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.472798943s
STEP: Saw pod success
Feb  8 12:58:53.573: INFO: Pod "pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 12:58:53.610: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 12:58:53.873: INFO: Waiting for pod pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005 to disappear
Feb  8 12:58:53.901: INFO: Pod pod-projected-secrets-b9b7cf6f-4a72-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:58:53.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4wc5t" for this suite.
Feb  8 12:59:00.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:59:00.245: INFO: namespace: e2e-tests-projected-4wc5t, resource: bindings, ignored listing per whitelist
Feb  8 12:59:00.281: INFO: namespace e2e-tests-projected-4wc5t deletion completed in 6.368599356s

• [SLOW TEST:21.643 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
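Projected volumes differ from plain secret volumes in that several sources (secrets, configMaps, downwardAPI, serviceAccountToken) can be merged into one mount. A sketch of the pod this test likely creates, with the container name and secret name taken from the log and the image and mount path assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets  # the test uses a generated, UID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # container name taken from the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test    # matches the secret created in the STEP above
```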
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:59:00.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 12:59:00.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:02.446: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 12:59:02.446: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  8 12:59:02.571: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  8 12:59:02.733: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  8 12:59:02.788: INFO: scanned /root for discovery docs: 
Feb  8 12:59:02.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:29.507: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  8 12:59:29.507: INFO: stdout: "Created e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d\nScaling up e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  8 12:59:29.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:29.682: INFO: stderr: ""
Feb  8 12:59:29.682: INFO: stdout: "e2e-test-nginx-rc-b9qfh e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d-5qs5k "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  8 12:59:34.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:34.908: INFO: stderr: ""
Feb  8 12:59:34.908: INFO: stdout: "e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d-5qs5k "
Feb  8 12:59:34.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d-5qs5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:35.040: INFO: stderr: ""
Feb  8 12:59:35.041: INFO: stdout: "true"
Feb  8 12:59:35.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d-5qs5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:35.160: INFO: stderr: ""
Feb  8 12:59:35.160: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  8 12:59:35.161: INFO: e2e-test-nginx-rc-d42a1495bcce781eec21c399f548309d-5qs5k is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  8 12:59:35.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dd6sd'
Feb  8 12:59:35.366: INFO: stderr: ""
Feb  8 12:59:35.367: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 12:59:35.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dd6sd" for this suite.
Feb  8 12:59:59.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 12:59:59.707: INFO: namespace: e2e-tests-kubectl-dd6sd, resource: bindings, ignored listing per whitelist
Feb  8 12:59:59.761: INFO: namespace e2e-tests-kubectl-dd6sd deletion completed in 24.358182116s

• [SLOW TEST:59.480 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
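[Editor's note] The stderr above shows `kubectl rolling-update` is deprecated in favor of `rollout`. A minimal sketch of the modern equivalent, assuming a Deployment is used instead of a bare ReplicationController (the name `e2e-test-nginx` below is illustrative, not from the suite):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx        # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: e2e-test-nginx
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent
```

A rolling update to the same image can then be triggered with `kubectl rollout restart deployment/e2e-test-nginx` and observed with `kubectl rollout status deployment/e2e-test-nginx`.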
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 12:59:59.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 12:59:59.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-5mfld" to be "success or failure"
Feb  8 12:59:59.990: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.126332ms
Feb  8 13:00:02.022: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060413582s
Feb  8 13:00:04.054: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092434711s
Feb  8 13:00:06.722: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760964092s
Feb  8 13:00:08.754: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.792873872s
Feb  8 13:00:10.770: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.808699051s
Feb  8 13:00:12.811: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.850005845s
STEP: Saw pod success
Feb  8 13:00:12.812: INFO: Pod "downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:00:12.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 13:00:12.950: INFO: Waiting for pod downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005 to disappear
Feb  8 13:00:12.962: INFO: Pod downwardapi-volume-e9ebb83a-4a72-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:00:12.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5mfld" for this suite.
Feb  8 13:00:19.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:00:19.297: INFO: namespace: e2e-tests-projected-5mfld, resource: bindings, ignored listing per whitelist
Feb  8 13:00:19.297: INFO: namespace e2e-tests-projected-5mfld deletion completed in 6.288087444s

• [SLOW TEST:19.535 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
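[Editor's note] The test above asserts that `defaultMode` is applied to files in a projected downward API volume. A minimal sketch of such a pod spec, assuming the asserted mode is 0400 (names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400     # assumed mode; every file in the volume gets it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The test's client container would read `/etc/podinfo/podname` and verify its permission bits match `defaultMode`.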
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:00:19.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 13:00:19.800: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  8 13:00:19.846: INFO: Number of nodes with available pods: 0
Feb  8 13:00:19.846: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:20.882: INFO: Number of nodes with available pods: 0
Feb  8 13:00:20.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:21.887: INFO: Number of nodes with available pods: 0
Feb  8 13:00:21.887: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:22.880: INFO: Number of nodes with available pods: 0
Feb  8 13:00:22.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:24.954: INFO: Number of nodes with available pods: 0
Feb  8 13:00:24.954: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:26.775: INFO: Number of nodes with available pods: 0
Feb  8 13:00:26.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:27.676: INFO: Number of nodes with available pods: 0
Feb  8 13:00:27.676: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:27.943: INFO: Number of nodes with available pods: 0
Feb  8 13:00:27.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:29.051: INFO: Number of nodes with available pods: 0
Feb  8 13:00:29.051: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:29.890: INFO: Number of nodes with available pods: 0
Feb  8 13:00:29.890: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:30.884: INFO: Number of nodes with available pods: 1
Feb  8 13:00:30.884: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  8 13:00:31.091: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:32.143: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:33.920: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:34.204: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:35.141: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:36.164: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:37.143: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:38.186: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:38.186: INFO: Pod daemon-set-x6ltw is not available
Feb  8 13:00:39.146: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:39.146: INFO: Pod daemon-set-x6ltw is not available
Feb  8 13:00:40.148: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:40.148: INFO: Pod daemon-set-x6ltw is not available
Feb  8 13:00:41.153: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:41.154: INFO: Pod daemon-set-x6ltw is not available
Feb  8 13:00:42.147: INFO: Wrong image for pod: daemon-set-x6ltw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 13:00:42.147: INFO: Pod daemon-set-x6ltw is not available
Feb  8 13:00:43.175: INFO: Pod daemon-set-xhw95 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  8 13:00:43.198: INFO: Number of nodes with available pods: 0
Feb  8 13:00:43.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:44.312: INFO: Number of nodes with available pods: 0
Feb  8 13:00:44.312: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:45.245: INFO: Number of nodes with available pods: 0
Feb  8 13:00:45.245: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:46.248: INFO: Number of nodes with available pods: 0
Feb  8 13:00:46.249: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:47.241: INFO: Number of nodes with available pods: 0
Feb  8 13:00:47.241: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:48.680: INFO: Number of nodes with available pods: 0
Feb  8 13:00:48.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:49.284: INFO: Number of nodes with available pods: 0
Feb  8 13:00:49.284: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:50.216: INFO: Number of nodes with available pods: 0
Feb  8 13:00:50.216: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:51.270: INFO: Number of nodes with available pods: 0
Feb  8 13:00:51.270: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  8 13:00:52.221: INFO: Number of nodes with available pods: 1
Feb  8 13:00:52.221: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-979mp, will wait for the garbage collector to delete the pods
Feb  8 13:00:52.312: INFO: Deleting DaemonSet.extensions daemon-set took: 12.373616ms
Feb  8 13:00:52.412: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.283919ms
Feb  8 13:01:03.521: INFO: Number of nodes with available pods: 0
Feb  8 13:01:03.521: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 13:01:03.526: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-979mp/daemonsets","resourceVersion":"20980793"},"items":null}

Feb  8 13:01:03.530: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-979mp/pods","resourceVersion":"20980793"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:01:03.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-979mp" for this suite.
Feb  8 13:01:09.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:01:10.302: INFO: namespace: e2e-tests-daemonsets-979mp, resource: bindings, ignored listing per whitelist
Feb  8 13:01:10.394: INFO: namespace e2e-tests-daemonsets-979mp deletion completed in 6.842564954s

• [SLOW TEST:51.098 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
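[Editor's note] The image swap from `docker.io/library/nginx:1.14-alpine` to `gcr.io/kubernetes-e2e-test-images/redis:1.0` seen above is driven by the DaemonSet's RollingUpdate strategy. A minimal sketch of a DaemonSet configured that way (field values are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # illustrative name
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # replace one pod per node at a time
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Changing the pod template image, e.g. `kubectl set image ds/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0`, causes the controller to delete and recreate daemon pods node by node, matching the "Pod ... is not available" transitions in the log.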
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:01:10.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1425061b-4a73-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 13:01:10.819: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-dh5hz" to be "success or failure"
Feb  8 13:01:10.840: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.664704ms
Feb  8 13:01:13.235: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415997114s
Feb  8 13:01:15.471: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.652746221s
Feb  8 13:01:17.492: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672976723s
Feb  8 13:01:20.292: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.473492436s
Feb  8 13:01:22.329: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.510352745s
Feb  8 13:01:24.351: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.532566919s
Feb  8 13:01:26.364: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.545790631s
STEP: Saw pod success
Feb  8 13:01:26.365: INFO: Pod "pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:01:26.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:01:27.807: INFO: Waiting for pod pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005 to disappear
Feb  8 13:01:27.824: INFO: Pod pod-projected-configmaps-14265899-4a73-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:01:27.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dh5hz" for this suite.
Feb  8 13:01:35.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:01:35.984: INFO: namespace: e2e-tests-projected-dh5hz, resource: bindings, ignored listing per whitelist
Feb  8 13:01:36.028: INFO: namespace e2e-tests-projected-dh5hz deletion completed in 8.190473955s

• [SLOW TEST:25.632 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
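[Editor's note] A minimal sketch of consuming a projected ConfigMap volume as a non-root user, as the test name implies. The UID, paths, and ConfigMap name are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot-demo   # illustrative name
spec:
  securityContext:
    runAsUser: 1000           # assumed non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
      readOnly: true
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-configmap  # assumed to exist in the namespace
```

The test then checks that the container, running as UID 1000, can read the projected files with their expected contents.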
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:01:36.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb  8 13:01:50.788: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-236d1bd4-4a73-11ea-95d6-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-z8q4w", SelfLink:"/api/v1/namespaces/e2e-tests-pods-z8q4w/pods/pod-submit-remove-236d1bd4-4a73-11ea-95d6-0242ac110005", UID:"237ca1fe-4a73-11ea-a994-fa163e34d433", ResourceVersion:"20980900", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716763696, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"426912774"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2n84s", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ebefc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2n84s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029a02d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000db82a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029a0310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0029a0330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029a0338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029a033c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763697, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763708, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763708, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763696, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0018141c0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0018141e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://baf0d22413faaea3845073e5aca1b249ace6cf70b9e699ba81045fe02392ce9a"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:01:59.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-z8q4w" for this suite.
Feb  8 13:02:05.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:02:05.905: INFO: namespace: e2e-tests-pods-z8q4w, resource: bindings, ignored listing per whitelist
Feb  8 13:02:06.009: INFO: namespace e2e-tests-pods-z8q4w deletion completed in 6.277799146s

• [SLOW TEST:29.981 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
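[Editor's note] The pod dump above shows the object the submit-and-remove test watches: labeled `name: foo`, with a 30-second grace period honored on deletion. A minimal sketch of an equivalent pod (names and values illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-demo   # illustrative name
  labels:
    name: foo                    # the label selector the test's watch uses
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
```

Deleting it with `kubectl delete pod pod-submit-remove-demo --grace-period=30` should produce the same sequence the test verifies: a termination notice observed by the kubelet, then a deletion event on the watch.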
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:02:06.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 13:02:06.561: INFO: Creating deployment "test-recreate-deployment"
Feb  8 13:02:06.657: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  8 13:02:06.704: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb  8 13:02:08.722: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  8 13:02:08.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763727, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:02:10.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763727, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:02:13.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763727, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:02:14.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763727, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:02:16.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763727, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716763726, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:02:18.742: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  8 13:02:18.779: INFO: Updating deployment test-recreate-deployment
Feb  8 13:02:18.779: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  8 13:02:19.530: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-2ltgd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2ltgd/deployments/test-recreate-deployment,UID:3564bbcb-4a73-11ea-a994-fa163e34d433,ResourceVersion:20980993,Generation:2,CreationTimestamp:2020-02-08 13:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-08 13:02:19 +0000 UTC 2020-02-08 13:02:19 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-08 13:02:19 +0000 UTC 2020-02-08 13:02:06 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  8 13:02:19.548: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-2ltgd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2ltgd/replicasets/test-recreate-deployment-589c4bfd,UID:3ce72553-4a73-11ea-a994-fa163e34d433,ResourceVersion:20980991,Generation:1,CreationTimestamp:2020-02-08 13:02:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3564bbcb-4a73-11ea-a994-fa163e34d433 0xc00268c3cf 0xc00268c3e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 13:02:19.548: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  8 13:02:19.549: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-2ltgd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2ltgd/replicasets/test-recreate-deployment-5bf7f65dc,UID:357bf0e7-4a73-11ea-a994-fa163e34d433,ResourceVersion:20980982,Generation:2,CreationTimestamp:2020-02-08 13:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3564bbcb-4a73-11ea-a994-fa163e34d433 0xc00268c4a0 0xc00268c4a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 13:02:19.595: INFO: Pod "test-recreate-deployment-589c4bfd-hx75f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-hx75f,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-2ltgd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2ltgd/pods/test-recreate-deployment-589c4bfd-hx75f,UID:3cea680d-4a73-11ea-a994-fa163e34d433,ResourceVersion:20980995,Generation:0,CreationTimestamp:2020-02-08 13:02:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 3ce72553-4a73-11ea-a994-fa163e34d433 0xc00268cf6f 0xc00268cf80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-llxxt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-llxxt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-llxxt true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00268d2e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00268d300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:02:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:02:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:02:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:02:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-08 13:02:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:02:19.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2ltgd" for this suite.
Feb  8 13:02:30.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:02:30.688: INFO: namespace: e2e-tests-deployment-2ltgd, resource: bindings, ignored listing per whitelist
Feb  8 13:02:30.710: INFO: namespace e2e-tests-deployment-2ltgd deletion completed in 11.099359678s

• [SLOW TEST:24.700 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
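The RecreateDeployment test above exercises the `Recreate` strategy: the old ReplicaSet ("test-recreate-deployment-5bf7f65dc", redis) is scaled to zero before the new one ("test-recreate-deployment-589c4bfd", nginx) creates any pod, so old and new pods never run together. A minimal manifest that behaves the same way is sketched below; names, labels, and the image are taken from the object dumps in the log, but this is an illustrative sketch, not the test's exact generated spec:

```yaml
# Sketch of a Deployment using the Recreate strategy, as exercised by the
# test above. With strategy.type: Recreate, all pods from the old template
# are terminated before any pod from the new template is created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate        # no rollingUpdate block may be set with Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Updating `.spec.template` (as the test does at 13:02:18 by swapping the container image) bumps the Deployment's generation to 2 and scales the revision-1 ReplicaSet to `Replicas:*0` before the revision-2 ReplicaSet starts its pod, which matches the ReplicaSet dumps above.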
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:02:30.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-43f9e70e-4a73-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 13:02:31.120: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-982tk" to be "success or failure"
Feb  8 13:02:31.140: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.353317ms
Feb  8 13:02:33.300: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179896805s
Feb  8 13:02:35.319: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198796557s
Feb  8 13:02:37.700: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580261175s
Feb  8 13:02:39.776: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.655684107s
Feb  8 13:02:41.787: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.666790375s
Feb  8 13:02:43.802: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.681827981s
STEP: Saw pod success
Feb  8 13:02:43.802: INFO: Pod "pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:02:43.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:02:44.422: INFO: Waiting for pod pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005 to disappear
Feb  8 13:02:44.430: INFO: Pod pod-projected-configmaps-43fb1f03-4a73-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:02:44.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-982tk" for this suite.
Feb  8 13:02:50.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:02:50.974: INFO: namespace: e2e-tests-projected-982tk, resource: bindings, ignored listing per whitelist
Feb  8 13:02:51.099: INFO: namespace e2e-tests-projected-982tk deletion completed in 6.664019896s

• [SLOW TEST:20.389 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
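The Projected configMap test above exercises "volume with mappings": a ConfigMap key is remapped to a different file path inside the pod via `items[].path`, and the test container simply reads the mapped file and exits, which is why the pod is expected to reach `Succeeded` ("success or failure"). The sketch below shows the pattern; the key, path, and container command are illustrative assumptions, not the exact values the test generates:

```yaml
# Sketch of the "consumable from pods in volume with mappings" pattern:
# the ConfigMap key data-1 is exposed inside the pod under a remapped
# file path via items[].path. Key and path names are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2   # key data-1 appears as this file
```

Because the container's command reads the file once and exits 0, the pod terminates in phase `Succeeded`, satisfying the "success or failure" wait loop seen in the log.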
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:02:51.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gzps5
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-gzps5
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-gzps5
Feb  8 13:02:51.441: INFO: Found 0 stateful pods, waiting for 1
Feb  8 13:03:01.465: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 13:03:11.464: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  8 13:03:11.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:03:12.145: INFO: stderr: "I0208 13:03:11.741845    3753 log.go:172] (0xc0007c2420) (0xc000754640) Create stream\nI0208 13:03:11.742174    3753 log.go:172] (0xc0007c2420) (0xc000754640) Stream added, broadcasting: 1\nI0208 13:03:11.748952    3753 log.go:172] (0xc0007c2420) Reply frame received for 1\nI0208 13:03:11.749006    3753 log.go:172] (0xc0007c2420) (0xc0005f0dc0) Create stream\nI0208 13:03:11.749026    3753 log.go:172] (0xc0007c2420) (0xc0005f0dc0) Stream added, broadcasting: 3\nI0208 13:03:11.750609    3753 log.go:172] (0xc0007c2420) Reply frame received for 3\nI0208 13:03:11.750667    3753 log.go:172] (0xc0007c2420) (0xc00076a000) Create stream\nI0208 13:03:11.750708    3753 log.go:172] (0xc0007c2420) (0xc00076a000) Stream added, broadcasting: 5\nI0208 13:03:11.751471    3753 log.go:172] (0xc0007c2420) Reply frame received for 5\nI0208 13:03:11.967303    3753 log.go:172] (0xc0007c2420) Data frame received for 3\nI0208 13:03:11.967421    3753 log.go:172] (0xc0005f0dc0) (3) Data frame handling\nI0208 13:03:11.967451    3753 log.go:172] (0xc0005f0dc0) (3) Data frame sent\nI0208 13:03:12.127143    3753 log.go:172] (0xc0007c2420) Data frame received for 1\nI0208 13:03:12.127774    3753 log.go:172] (0xc0007c2420) (0xc0005f0dc0) Stream removed, broadcasting: 3\nI0208 13:03:12.127841    3753 log.go:172] (0xc000754640) (1) Data frame handling\nI0208 13:03:12.127888    3753 log.go:172] (0xc000754640) (1) Data frame sent\nI0208 13:03:12.127996    3753 log.go:172] (0xc0007c2420) (0xc00076a000) Stream removed, broadcasting: 5\nI0208 13:03:12.128028    3753 log.go:172] (0xc0007c2420) (0xc000754640) Stream removed, broadcasting: 1\nI0208 13:03:12.128065    3753 log.go:172] (0xc0007c2420) Go away received\nI0208 13:03:12.128766    3753 log.go:172] (0xc0007c2420) (0xc000754640) Stream removed, broadcasting: 1\nI0208 13:03:12.128837    3753 log.go:172] (0xc0007c2420) (0xc0005f0dc0) Stream removed, broadcasting: 3\nI0208 13:03:12.128871    3753 log.go:172] 
(0xc0007c2420) (0xc00076a000) Stream removed, broadcasting: 5\n"
Feb  8 13:03:12.145: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:03:12.145: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

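The `mv` above is how the test makes `ss-0` unhealthy: the StatefulSet's pods serve `/usr/share/nginx/html/index.html` and their readiness probe depends on it, so moving the file away flips the pod to `Ready=false`, and with the default `OrderedReady` pod management policy the controller then refuses to create further replicas (the "doesn't scale past 1" loop that follows). A hedged sketch of such a StatefulSet is below; the probe values and service name are assumptions based on the log, not the test's exact spec:

```yaml
# Sketch of a StatefulSet whose readiness depends on index.html being
# present, so `mv /usr/share/nginx/html/index.html /tmp/` makes the pod
# unready and halts ordered scaling. Probe values are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: OrderedReady   # default: create/delete one pod at a time
  replicas: 1
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          periodSeconds: 1
```

Restoring the file with `mv /tmp/index.html /usr/share/nginx/html/` (as the test does before scaling to 3) makes the probe pass again, and the controller resumes creating `ss-1` and `ss-2` in order.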
Feb  8 13:03:12.180: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  8 13:03:22.195: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:03:22.195: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:03:22.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999453s
Feb  8 13:03:23.294: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982560171s
Feb  8 13:03:24.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.926207373s
Feb  8 13:03:25.321: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.913240058s
Feb  8 13:03:26.372: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.899614487s
Feb  8 13:03:27.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.84823324s
Feb  8 13:03:28.430: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.819159403s
Feb  8 13:03:29.455: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.789906104s
Feb  8 13:03:30.485: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.765088775s
Feb  8 13:03:31.583: INFO: Verifying statefulset ss doesn't scale past 1 for another 735.723263ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-gzps5
Feb  8 13:03:33.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:03:34.855: INFO: stderr: "I0208 13:03:34.356362    3776 log.go:172] (0xc000138e70) (0xc0001a57c0) Create stream\nI0208 13:03:34.363121    3776 log.go:172] (0xc000138e70) (0xc0001a57c0) Stream added, broadcasting: 1\nI0208 13:03:34.401372    3776 log.go:172] (0xc000138e70) Reply frame received for 1\nI0208 13:03:34.402056    3776 log.go:172] (0xc000138e70) (0xc0001a4b40) Create stream\nI0208 13:03:34.402396    3776 log.go:172] (0xc000138e70) (0xc0001a4b40) Stream added, broadcasting: 3\nI0208 13:03:34.417978    3776 log.go:172] (0xc000138e70) Reply frame received for 3\nI0208 13:03:34.418223    3776 log.go:172] (0xc000138e70) (0xc000818000) Create stream\nI0208 13:03:34.418271    3776 log.go:172] (0xc000138e70) (0xc000818000) Stream added, broadcasting: 5\nI0208 13:03:34.420587    3776 log.go:172] (0xc000138e70) Reply frame received for 5\nI0208 13:03:34.724460    3776 log.go:172] (0xc000138e70) Data frame received for 3\nI0208 13:03:34.724570    3776 log.go:172] (0xc0001a4b40) (3) Data frame handling\nI0208 13:03:34.724614    3776 log.go:172] (0xc0001a4b40) (3) Data frame sent\nI0208 13:03:34.841296    3776 log.go:172] (0xc000138e70) (0xc0001a4b40) Stream removed, broadcasting: 3\nI0208 13:03:34.841419    3776 log.go:172] (0xc000138e70) Data frame received for 1\nI0208 13:03:34.841445    3776 log.go:172] (0xc0001a57c0) (1) Data frame handling\nI0208 13:03:34.841481    3776 log.go:172] (0xc0001a57c0) (1) Data frame sent\nI0208 13:03:34.841502    3776 log.go:172] (0xc000138e70) (0xc000818000) Stream removed, broadcasting: 5\nI0208 13:03:34.841536    3776 log.go:172] (0xc000138e70) (0xc0001a57c0) Stream removed, broadcasting: 1\nI0208 13:03:34.841556    3776 log.go:172] (0xc000138e70) Go away received\nI0208 13:03:34.842079    3776 log.go:172] (0xc000138e70) (0xc0001a57c0) Stream removed, broadcasting: 1\nI0208 13:03:34.842097    3776 log.go:172] (0xc000138e70) (0xc0001a4b40) Stream removed, broadcasting: 3\nI0208 13:03:34.842101    3776 log.go:172] 
(0xc000138e70) (0xc000818000) Stream removed, broadcasting: 5\n"
Feb  8 13:03:34.855: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:03:34.855: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:03:34.873: INFO: Found 1 stateful pods, waiting for 3
Feb  8 13:03:45.060: INFO: Found 2 stateful pods, waiting for 3
Feb  8 13:03:54.894: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:03:54.894: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:03:54.894: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 13:04:04.887: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:04:04.887: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:04:04.887: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  8 13:04:04.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:04:05.633: INFO: stderr: "I0208 13:04:05.074692    3798 log.go:172] (0xc0007020b0) (0xc00072e5a0) Create stream\nI0208 13:04:05.074995    3798 log.go:172] (0xc0007020b0) (0xc00072e5a0) Stream added, broadcasting: 1\nI0208 13:04:05.113894    3798 log.go:172] (0xc0007020b0) Reply frame received for 1\nI0208 13:04:05.114064    3798 log.go:172] (0xc0007020b0) (0xc00057ec80) Create stream\nI0208 13:04:05.114095    3798 log.go:172] (0xc0007020b0) (0xc00057ec80) Stream added, broadcasting: 3\nI0208 13:04:05.116369    3798 log.go:172] (0xc0007020b0) Reply frame received for 3\nI0208 13:04:05.116415    3798 log.go:172] (0xc0007020b0) (0xc00057edc0) Create stream\nI0208 13:04:05.116432    3798 log.go:172] (0xc0007020b0) (0xc00057edc0) Stream added, broadcasting: 5\nI0208 13:04:05.118081    3798 log.go:172] (0xc0007020b0) Reply frame received for 5\nI0208 13:04:05.332224    3798 log.go:172] (0xc0007020b0) Data frame received for 3\nI0208 13:04:05.332323    3798 log.go:172] (0xc00057ec80) (3) Data frame handling\nI0208 13:04:05.332368    3798 log.go:172] (0xc00057ec80) (3) Data frame sent\nI0208 13:04:05.610607    3798 log.go:172] (0xc0007020b0) Data frame received for 1\nI0208 13:04:05.610751    3798 log.go:172] (0xc0007020b0) (0xc00057ec80) Stream removed, broadcasting: 3\nI0208 13:04:05.610862    3798 log.go:172] (0xc00072e5a0) (1) Data frame handling\nI0208 13:04:05.610902    3798 log.go:172] (0xc00072e5a0) (1) Data frame sent\nI0208 13:04:05.611044    3798 log.go:172] (0xc0007020b0) (0xc00057edc0) Stream removed, broadcasting: 5\nI0208 13:04:05.611184    3798 log.go:172] (0xc0007020b0) (0xc00072e5a0) Stream removed, broadcasting: 1\nI0208 13:04:05.611249    3798 log.go:172] (0xc0007020b0) Go away received\nI0208 13:04:05.612253    3798 log.go:172] (0xc0007020b0) (0xc00072e5a0) Stream removed, broadcasting: 1\nI0208 13:04:05.612288    3798 log.go:172] (0xc0007020b0) (0xc00057ec80) Stream removed, broadcasting: 3\nI0208 13:04:05.612298    3798 log.go:172] 
(0xc0007020b0) (0xc00057edc0) Stream removed, broadcasting: 5\n"
Feb  8 13:04:05.633: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:04:05.633: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:04:05.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:04:06.277: INFO: stderr: "I0208 13:04:05.909157    3820 log.go:172] (0xc000138580) (0xc0005d52c0) Create stream\nI0208 13:04:05.909445    3820 log.go:172] (0xc000138580) (0xc0005d52c0) Stream added, broadcasting: 1\nI0208 13:04:05.915065    3820 log.go:172] (0xc000138580) Reply frame received for 1\nI0208 13:04:05.915119    3820 log.go:172] (0xc000138580) (0xc0004cc000) Create stream\nI0208 13:04:05.915133    3820 log.go:172] (0xc000138580) (0xc0004cc000) Stream added, broadcasting: 3\nI0208 13:04:05.921056    3820 log.go:172] (0xc000138580) Reply frame received for 3\nI0208 13:04:05.921079    3820 log.go:172] (0xc000138580) (0xc0004cc0a0) Create stream\nI0208 13:04:05.921100    3820 log.go:172] (0xc000138580) (0xc0004cc0a0) Stream added, broadcasting: 5\nI0208 13:04:05.923046    3820 log.go:172] (0xc000138580) Reply frame received for 5\nI0208 13:04:06.085544    3820 log.go:172] (0xc000138580) Data frame received for 3\nI0208 13:04:06.085646    3820 log.go:172] (0xc0004cc000) (3) Data frame handling\nI0208 13:04:06.085661    3820 log.go:172] (0xc0004cc000) (3) Data frame sent\nI0208 13:04:06.237228    3820 log.go:172] (0xc000138580) (0xc0004cc0a0) Stream removed, broadcasting: 5\nI0208 13:04:06.237460    3820 log.go:172] (0xc000138580) Data frame received for 1\nI0208 13:04:06.237486    3820 log.go:172] (0xc0005d52c0) (1) Data frame handling\nI0208 13:04:06.237525    3820 log.go:172] (0xc0005d52c0) (1) Data frame sent\nI0208 13:04:06.237668    3820 log.go:172] (0xc000138580) (0xc0005d52c0) Stream removed, broadcasting: 1\nI0208 13:04:06.238724    3820 log.go:172] (0xc000138580) (0xc0004cc000) Stream removed, broadcasting: 3\nI0208 13:04:06.238793    3820 log.go:172] (0xc000138580) Go away received\nI0208 13:04:06.238922    3820 log.go:172] (0xc000138580) (0xc0005d52c0) Stream removed, broadcasting: 1\nI0208 13:04:06.238942    3820 log.go:172] (0xc000138580) (0xc0004cc000) Stream removed, broadcasting: 3\nI0208 13:04:06.238957    3820 log.go:172] 
(0xc000138580) (0xc0004cc0a0) Stream removed, broadcasting: 5\n"
Feb  8 13:04:06.279: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:04:06.279: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:04:06.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:04:07.087: INFO: stderr: "I0208 13:04:06.603450    3841 log.go:172] (0xc0006bc370) (0xc000706640) Create stream\nI0208 13:04:06.603835    3841 log.go:172] (0xc0006bc370) (0xc000706640) Stream added, broadcasting: 1\nI0208 13:04:06.622071    3841 log.go:172] (0xc0006bc370) Reply frame received for 1\nI0208 13:04:06.622596    3841 log.go:172] (0xc0006bc370) (0xc00061cbe0) Create stream\nI0208 13:04:06.622667    3841 log.go:172] (0xc0006bc370) (0xc00061cbe0) Stream added, broadcasting: 3\nI0208 13:04:06.627576    3841 log.go:172] (0xc0006bc370) Reply frame received for 3\nI0208 13:04:06.627750    3841 log.go:172] (0xc0006bc370) (0xc000368000) Create stream\nI0208 13:04:06.627764    3841 log.go:172] (0xc0006bc370) (0xc000368000) Stream added, broadcasting: 5\nI0208 13:04:06.631640    3841 log.go:172] (0xc0006bc370) Reply frame received for 5\nI0208 13:04:06.922661    3841 log.go:172] (0xc0006bc370) Data frame received for 3\nI0208 13:04:06.922734    3841 log.go:172] (0xc00061cbe0) (3) Data frame handling\nI0208 13:04:06.922752    3841 log.go:172] (0xc00061cbe0) (3) Data frame sent\nI0208 13:04:07.075483    3841 log.go:172] (0xc0006bc370) (0xc00061cbe0) Stream removed, broadcasting: 3\nI0208 13:04:07.076022    3841 log.go:172] (0xc0006bc370) Data frame received for 1\nI0208 13:04:07.076044    3841 log.go:172] (0xc000706640) (1) Data frame handling\nI0208 13:04:07.076084    3841 log.go:172] (0xc000706640) (1) Data frame sent\nI0208 13:04:07.076131    3841 log.go:172] (0xc0006bc370) (0xc000706640) Stream removed, broadcasting: 1\nI0208 13:04:07.076267    3841 log.go:172] (0xc0006bc370) (0xc000368000) Stream removed, broadcasting: 5\nI0208 13:04:07.076473    3841 log.go:172] (0xc0006bc370) Go away received\nI0208 13:04:07.076851    3841 log.go:172] (0xc0006bc370) (0xc000706640) Stream removed, broadcasting: 1\nI0208 13:04:07.076895    3841 log.go:172] (0xc0006bc370) (0xc00061cbe0) Stream removed, broadcasting: 3\nI0208 13:04:07.076908    3841 log.go:172] 
(0xc0006bc370) (0xc000368000) Stream removed, broadcasting: 5\n"
Feb  8 13:04:07.087: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:04:07.087: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:04:07.087: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:04:07.099: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  8 13:04:17.160: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:04:17.160: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:04:17.160: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:04:17.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999484s
Feb  8 13:04:18.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988094366s
Feb  8 13:04:19.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971237692s
Feb  8 13:04:20.300: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949654284s
Feb  8 13:04:21.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.93498259s
Feb  8 13:04:22.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.921441845s
Feb  8 13:04:23.359: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.889454032s
Feb  8 13:04:24.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.875627472s
Feb  8 13:04:25.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.857679408s
Feb  8 13:04:26.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 833.186487ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace e2e-tests-statefulset-gzps5
Feb  8 13:04:27.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:04:28.368: INFO: stderr: "I0208 13:04:28.003873    3862 log.go:172] (0xc00015c790) (0xc000605220) Create stream\nI0208 13:04:28.004149    3862 log.go:172] (0xc00015c790) (0xc000605220) Stream added, broadcasting: 1\nI0208 13:04:28.012816    3862 log.go:172] (0xc00015c790) Reply frame received for 1\nI0208 13:04:28.012880    3862 log.go:172] (0xc00015c790) (0xc000734000) Create stream\nI0208 13:04:28.012897    3862 log.go:172] (0xc00015c790) (0xc000734000) Stream added, broadcasting: 3\nI0208 13:04:28.014348    3862 log.go:172] (0xc00015c790) Reply frame received for 3\nI0208 13:04:28.014435    3862 log.go:172] (0xc00015c790) (0xc0005e0000) Create stream\nI0208 13:04:28.014455    3862 log.go:172] (0xc00015c790) (0xc0005e0000) Stream added, broadcasting: 5\nI0208 13:04:28.016746    3862 log.go:172] (0xc00015c790) Reply frame received for 5\nI0208 13:04:28.211824    3862 log.go:172] (0xc00015c790) Data frame received for 3\nI0208 13:04:28.211985    3862 log.go:172] (0xc000734000) (3) Data frame handling\nI0208 13:04:28.212073    3862 log.go:172] (0xc000734000) (3) Data frame sent\nI0208 13:04:28.349631    3862 log.go:172] (0xc00015c790) Data frame received for 1\nI0208 13:04:28.349893    3862 log.go:172] (0xc00015c790) (0xc000734000) Stream removed, broadcasting: 3\nI0208 13:04:28.350109    3862 log.go:172] (0xc000605220) (1) Data frame handling\nI0208 13:04:28.350244    3862 log.go:172] (0xc000605220) (1) Data frame sent\nI0208 13:04:28.350348    3862 log.go:172] (0xc00015c790) (0xc0005e0000) Stream removed, broadcasting: 5\nI0208 13:04:28.350397    3862 log.go:172] (0xc00015c790) (0xc000605220) Stream removed, broadcasting: 1\nI0208 13:04:28.350428    3862 log.go:172] (0xc00015c790) Go away received\nI0208 13:04:28.351306    3862 log.go:172] (0xc00015c790) (0xc000605220) Stream removed, broadcasting: 1\nI0208 13:04:28.351388    3862 log.go:172] (0xc00015c790) (0xc000734000) Stream removed, broadcasting: 3\nI0208 13:04:28.351410    3862 log.go:172] 
(0xc00015c790) (0xc0005e0000) Stream removed, broadcasting: 5\n"
Feb  8 13:04:28.368: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:04:28.369: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:04:28.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:04:29.244: INFO: stderr: "I0208 13:04:28.769131    3887 log.go:172] (0xc000138840) (0xc00065f360) Create stream\nI0208 13:04:28.769313    3887 log.go:172] (0xc000138840) (0xc00065f360) Stream added, broadcasting: 1\nI0208 13:04:28.777089    3887 log.go:172] (0xc000138840) Reply frame received for 1\nI0208 13:04:28.777119    3887 log.go:172] (0xc000138840) (0xc000714000) Create stream\nI0208 13:04:28.777140    3887 log.go:172] (0xc000138840) (0xc000714000) Stream added, broadcasting: 3\nI0208 13:04:28.778401    3887 log.go:172] (0xc000138840) Reply frame received for 3\nI0208 13:04:28.778439    3887 log.go:172] (0xc000138840) (0xc000750000) Create stream\nI0208 13:04:28.778464    3887 log.go:172] (0xc000138840) (0xc000750000) Stream added, broadcasting: 5\nI0208 13:04:28.780496    3887 log.go:172] (0xc000138840) Reply frame received for 5\nI0208 13:04:29.101143    3887 log.go:172] (0xc000138840) Data frame received for 3\nI0208 13:04:29.101298    3887 log.go:172] (0xc000714000) (3) Data frame handling\nI0208 13:04:29.101342    3887 log.go:172] (0xc000714000) (3) Data frame sent\nI0208 13:04:29.232545    3887 log.go:172] (0xc000138840) Data frame received for 1\nI0208 13:04:29.232616    3887 log.go:172] (0xc000138840) (0xc000750000) Stream removed, broadcasting: 5\nI0208 13:04:29.232647    3887 log.go:172] (0xc00065f360) (1) Data frame handling\nI0208 13:04:29.232656    3887 log.go:172] (0xc00065f360) (1) Data frame sent\nI0208 13:04:29.232689    3887 log.go:172] (0xc000138840) (0xc000714000) Stream removed, broadcasting: 3\nI0208 13:04:29.232813    3887 log.go:172] (0xc000138840) (0xc00065f360) Stream removed, broadcasting: 1\nI0208 13:04:29.232833    3887 log.go:172] (0xc000138840) Go away received\nI0208 13:04:29.233478    3887 log.go:172] (0xc000138840) (0xc00065f360) Stream removed, broadcasting: 1\nI0208 13:04:29.233493    3887 log.go:172] (0xc000138840) (0xc000714000) Stream removed, broadcasting: 3\nI0208 13:04:29.233498    3887 log.go:172] 
(0xc000138840) (0xc000750000) Stream removed, broadcasting: 5\n"
Feb  8 13:04:29.245: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:04:29.245: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:04:29.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gzps5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:04:29.910: INFO: stderr: "I0208 13:04:29.439357    3908 log.go:172] (0xc00014c840) (0xc00063d4a0) Create stream\nI0208 13:04:29.439553    3908 log.go:172] (0xc00014c840) (0xc00063d4a0) Stream added, broadcasting: 1\nI0208 13:04:29.445629    3908 log.go:172] (0xc00014c840) Reply frame received for 1\nI0208 13:04:29.445693    3908 log.go:172] (0xc00014c840) (0xc0007ca000) Create stream\nI0208 13:04:29.445706    3908 log.go:172] (0xc00014c840) (0xc0007ca000) Stream added, broadcasting: 3\nI0208 13:04:29.447520    3908 log.go:172] (0xc00014c840) Reply frame received for 3\nI0208 13:04:29.447573    3908 log.go:172] (0xc00014c840) (0xc00075e000) Create stream\nI0208 13:04:29.447594    3908 log.go:172] (0xc00014c840) (0xc00075e000) Stream added, broadcasting: 5\nI0208 13:04:29.448701    3908 log.go:172] (0xc00014c840) Reply frame received for 5\nI0208 13:04:29.656404    3908 log.go:172] (0xc00014c840) Data frame received for 3\nI0208 13:04:29.656486    3908 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0208 13:04:29.656510    3908 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0208 13:04:29.897076    3908 log.go:172] (0xc00014c840) Data frame received for 1\nI0208 13:04:29.897199    3908 log.go:172] (0xc00014c840) (0xc00075e000) Stream removed, broadcasting: 5\nI0208 13:04:29.897278    3908 log.go:172] (0xc00063d4a0) (1) Data frame handling\nI0208 13:04:29.897314    3908 log.go:172] (0xc00063d4a0) (1) Data frame sent\nI0208 13:04:29.897366    3908 log.go:172] (0xc00014c840) (0xc0007ca000) Stream removed, broadcasting: 3\nI0208 13:04:29.897446    3908 log.go:172] (0xc00014c840) (0xc00063d4a0) Stream removed, broadcasting: 1\nI0208 13:04:29.898147    3908 log.go:172] (0xc00014c840) (0xc00063d4a0) Stream removed, broadcasting: 1\nI0208 13:04:29.898169    3908 log.go:172] (0xc00014c840) (0xc0007ca000) Stream removed, broadcasting: 3\nI0208 13:04:29.898184    3908 log.go:172] (0xc00014c840) (0xc00075e000) Stream removed, broadcasting: 5\nI0208 
13:04:29.898965    3908 log.go:172] (0xc00014c840) Go away received\n"
Feb  8 13:04:29.910: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:04:29.910: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:04:29.910: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  8 13:04:59.960: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gzps5
Feb  8 13:04:59.964: INFO: Scaling statefulset ss to 0
Feb  8 13:04:59.973: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:04:59.976: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:05:00.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gzps5" for this suite.
Feb  8 13:05:08.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:05:08.071: INFO: namespace: e2e-tests-statefulset-gzps5, resource: bindings, ignored listing per whitelist
Feb  8 13:05:08.278: INFO: namespace e2e-tests-statefulset-gzps5 deletion completed in 8.263755146s

• [SLOW TEST:137.179 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
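The exec commands in the StatefulSet test above deliberately break, and later restore, each pod's readiness probe by moving index.html out of nginx's web root. A minimal local sketch of why the `|| true` suffix matters, using scratch directories to stand in for the pod filesystem (no cluster involved):

```shell
# Scratch directories mimicking the pod's web root and /tmp.
webroot=$(mktemp -d)
tmpdir=$(mktemp -d)
echo ok > "$webroot/index.html"

# First move succeeds; the file the readiness probe serves is now gone,
# so the pod would report Ready=false.
mv -v "$webroot/index.html" "$tmpdir/" || true

# Retrying is harmless: mv fails (source no longer exists), but
# "|| true" forces exit status 0, so a repeated `kubectl exec` in the
# test loop never aborts the run.
mv -v "$webroot/index.html" "$tmpdir/" || true
```

The reverse direction (`mv -v /tmp/index.html /usr/share/nginx/html/`) uses the same idempotent pattern to flip the pods back to Ready.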
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:05:08.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 13:05:08.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-q9dnx'
Feb  8 13:05:09.136: INFO: stderr: ""
Feb  8 13:05:09.136: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb  8 13:05:09.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-q9dnx'
Feb  8 13:05:13.942: INFO: stderr: ""
Feb  8 13:05:13.942: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:05:13.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q9dnx" for this suite.
Feb  8 13:05:20.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:05:20.186: INFO: namespace: e2e-tests-kubectl-q9dnx, resource: bindings, ignored listing per whitelist
Feb  8 13:05:20.212: INFO: namespace e2e-tests-kubectl-q9dnx deletion completed in 6.249726042s

• [SLOW TEST:11.933 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
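The `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1` invocation above creates a bare Pod object rather than a Deployment. A hedged sketch of roughly the manifest that generator produces — label and field values are inferred from the log, not shown in it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-q9dnx
  labels:
    run: e2e-test-nginx-pod   # label assumed; run-pod/v1 labels by name
spec:
  restartPolicy: Never        # from --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```

With `restartPolicy: Never` the kubelet never restarts the container, which is why the test only verifies creation and then deletes the pod.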
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:05:20.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a8f267a5-4a73-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  8 13:05:20.544: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-rzftq" to be "success or failure"
Feb  8 13:05:20.604: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.693828ms
Feb  8 13:05:23.415: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.871035127s
Feb  8 13:05:25.432: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.888092812s
Feb  8 13:05:27.469: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.925638914s
Feb  8 13:05:30.290: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.746032781s
Feb  8 13:05:33.153: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.609641076s
Feb  8 13:05:35.162: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.618635799s
STEP: Saw pod success
Feb  8 13:05:35.162: INFO: Pod "pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:05:35.167: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:05:36.397: INFO: Waiting for pod pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005 to disappear
Feb  8 13:05:36.409: INFO: Pod pod-projected-configmaps-a8fdf83c-4a73-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:05:36.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rzftq" for this suite.
Feb  8 13:05:42.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:05:42.832: INFO: namespace: e2e-tests-projected-rzftq, resource: bindings, ignored listing per whitelist
Feb  8 13:05:42.840: INFO: namespace e2e-tests-projected-rzftq deletion completed in 6.420382097s

• [SLOW TEST:22.628 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
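The projected-ConfigMap test above consumes the same ConfigMap through two volumes in a single pod. A hedged sketch of such a pod spec — the names are shortened and the container image and command are assumptions, since the log never prints the manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # image assumed
    command: ["cat", "/etc/projected-configmap-volume-1/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-configmap-volume-1
    - name: vol-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```

The pod runs to completion ("success or failure") once the command exits 0, matching the Pending → Running → Succeeded phases logged above.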
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:05:42.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-b6l6n;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-b6l6n;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b6l6n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.100.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.100.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.100.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.100.153_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-b6l6n;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-b6l6n;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b6l6n.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b6l6n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.100.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.100.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.100.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.100.153_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 13:06:03.404: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.409: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.417: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b6l6n from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.422: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b6l6n from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.428: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b6l6n.svc from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.432: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b6l6n.svc from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.438: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.447: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.454: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.461: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.468: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005: the server could not find the requested resource (get pods dns-test-b6736c36-4a73-11ea-95d6-0242ac110005)
Feb  8 13:06:03.477: INFO: Lookups using e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b6l6n jessie_tcp@dns-test-service.e2e-tests-dns-b6l6n jessie_udp@dns-test-service.e2e-tests-dns-b6l6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-b6l6n.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b6l6n.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-b6l6n.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  8 13:06:09.230: INFO: DNS probes using e2e-tests-dns-b6l6n/dns-test-b6736c36-4a73-11ea-95d6-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:06:09.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-b6l6n" for this suite.
Feb  8 13:06:19.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:06:20.107: INFO: namespace: e2e-tests-dns-b6l6n, resource: bindings, ignored listing per whitelist
Feb  8 13:06:20.107: INFO: namespace e2e-tests-dns-b6l6n deletion completed in 10.305107818s

• [SLOW TEST:37.267 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
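The transient failures above resolve once DNS propagates: the probe pod repeats its lookup matrix until every record answers. As a rough sketch (an assumption simplified from the e2e DNS test, not the framework's actual helper), the `jessie_<proto>@<name>` identifiers in the log are generated by crossing each DNS name under test (bare service, namespaced, `.svc`-qualified, SRV) with both UDP and TCP, plus a pod A-record probe:

```go
package main

import "fmt"

// buildProbeNames is a hypothetical reconstruction of the lookup matrix
// visible in the log: every name is probed over both UDP and TCP using
// the "jessie" resolver image, plus one PodARecord probe per protocol.
func buildProbeNames(service, namespace string) []string {
	bases := []string{
		service,
		service + "." + namespace,
		service + "." + namespace + ".svc",
		"_http._tcp." + service + "." + namespace + ".svc",
	}
	var names []string
	for _, proto := range []string{"udp", "tcp"} {
		for _, base := range bases {
			names = append(names, fmt.Sprintf("jessie_%s@%s", proto, base))
		}
		names = append(names, fmt.Sprintf("jessie_%s@PodARecord", proto))
	}
	return names
}

func main() {
	// Prints a probe matrix matching the shape of the failed-lookup
	// list logged at 13:06:03 above.
	for _, n := range buildProbeNames("dns-test-service", "e2e-tests-dns-b6l6n") {
		fmt.Println(n)
	}
}
```

The test passes only when an iteration completes with zero entries in the failed-lookup list, which is what the "DNS probes ... succeeded" line at 13:06:09 records.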
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:06:20.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  8 13:06:52.879: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:52.879: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:53.044923       8 log.go:172] (0xc000de3080) (0xc00058d900) Create stream
I0208 13:06:53.045099       8 log.go:172] (0xc000de3080) (0xc00058d900) Stream added, broadcasting: 1
I0208 13:06:53.061485       8 log.go:172] (0xc000de3080) Reply frame received for 1
I0208 13:06:53.061603       8 log.go:172] (0xc000de3080) (0xc0009f6fa0) Create stream
I0208 13:06:53.061621       8 log.go:172] (0xc000de3080) (0xc0009f6fa0) Stream added, broadcasting: 3
I0208 13:06:53.063357       8 log.go:172] (0xc000de3080) Reply frame received for 3
I0208 13:06:53.063405       8 log.go:172] (0xc000de3080) (0xc00058de00) Create stream
I0208 13:06:53.063418       8 log.go:172] (0xc000de3080) (0xc00058de00) Stream added, broadcasting: 5
I0208 13:06:53.078410       8 log.go:172] (0xc000de3080) Reply frame received for 5
I0208 13:06:53.393622       8 log.go:172] (0xc000de3080) Data frame received for 3
I0208 13:06:53.393784       8 log.go:172] (0xc0009f6fa0) (3) Data frame handling
I0208 13:06:53.393853       8 log.go:172] (0xc0009f6fa0) (3) Data frame sent
I0208 13:06:53.707004       8 log.go:172] (0xc000de3080) (0xc0009f6fa0) Stream removed, broadcasting: 3
I0208 13:06:53.707248       8 log.go:172] (0xc000de3080) Data frame received for 1
I0208 13:06:53.707291       8 log.go:172] (0xc00058d900) (1) Data frame handling
I0208 13:06:53.707321       8 log.go:172] (0xc00058d900) (1) Data frame sent
I0208 13:06:53.707362       8 log.go:172] (0xc000de3080) (0xc00058de00) Stream removed, broadcasting: 5
I0208 13:06:53.707465       8 log.go:172] (0xc000de3080) (0xc00058d900) Stream removed, broadcasting: 1
I0208 13:06:53.707542       8 log.go:172] (0xc000de3080) Go away received
I0208 13:06:53.707749       8 log.go:172] (0xc000de3080) (0xc00058d900) Stream removed, broadcasting: 1
I0208 13:06:53.707763       8 log.go:172] (0xc000de3080) (0xc0009f6fa0) Stream removed, broadcasting: 3
I0208 13:06:53.707777       8 log.go:172] (0xc000de3080) (0xc00058de00) Stream removed, broadcasting: 5
Feb  8 13:06:53.707: INFO: Exec stderr: ""
Feb  8 13:06:53.707: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:53.708: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:53.832391       8 log.go:172] (0xc000de3550) (0xc000379900) Create stream
I0208 13:06:53.832462       8 log.go:172] (0xc000de3550) (0xc000379900) Stream added, broadcasting: 1
I0208 13:06:53.841827       8 log.go:172] (0xc000de3550) Reply frame received for 1
I0208 13:06:53.841870       8 log.go:172] (0xc000de3550) (0xc001f9e280) Create stream
I0208 13:06:53.841879       8 log.go:172] (0xc000de3550) (0xc001f9e280) Stream added, broadcasting: 3
I0208 13:06:53.843086       8 log.go:172] (0xc000de3550) Reply frame received for 3
I0208 13:06:53.843109       8 log.go:172] (0xc000de3550) (0xc000650320) Create stream
I0208 13:06:53.843117       8 log.go:172] (0xc000de3550) (0xc000650320) Stream added, broadcasting: 5
I0208 13:06:53.845344       8 log.go:172] (0xc000de3550) Reply frame received for 5
I0208 13:06:54.094742       8 log.go:172] (0xc000de3550) Data frame received for 3
I0208 13:06:54.094851       8 log.go:172] (0xc001f9e280) (3) Data frame handling
I0208 13:06:54.094890       8 log.go:172] (0xc001f9e280) (3) Data frame sent
I0208 13:06:54.282435       8 log.go:172] (0xc000de3550) Data frame received for 1
I0208 13:06:54.282521       8 log.go:172] (0xc000de3550) (0xc001f9e280) Stream removed, broadcasting: 3
I0208 13:06:54.282586       8 log.go:172] (0xc000379900) (1) Data frame handling
I0208 13:06:54.282621       8 log.go:172] (0xc000379900) (1) Data frame sent
I0208 13:06:54.282640       8 log.go:172] (0xc000de3550) (0xc000650320) Stream removed, broadcasting: 5
I0208 13:06:54.282702       8 log.go:172] (0xc000de3550) (0xc000379900) Stream removed, broadcasting: 1
I0208 13:06:54.282718       8 log.go:172] (0xc000de3550) Go away received
I0208 13:06:54.282881       8 log.go:172] (0xc000de3550) (0xc000379900) Stream removed, broadcasting: 1
I0208 13:06:54.282900       8 log.go:172] (0xc000de3550) (0xc001f9e280) Stream removed, broadcasting: 3
I0208 13:06:54.282916       8 log.go:172] (0xc000de3550) (0xc000650320) Stream removed, broadcasting: 5
Feb  8 13:06:54.282: INFO: Exec stderr: ""
Feb  8 13:06:54.283: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:54.283: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:54.347651       8 log.go:172] (0xc000de3a20) (0xc0000fc820) Create stream
I0208 13:06:54.347729       8 log.go:172] (0xc000de3a20) (0xc0000fc820) Stream added, broadcasting: 1
I0208 13:06:54.353709       8 log.go:172] (0xc000de3a20) Reply frame received for 1
I0208 13:06:54.353753       8 log.go:172] (0xc000de3a20) (0xc000650500) Create stream
I0208 13:06:54.353762       8 log.go:172] (0xc000de3a20) (0xc000650500) Stream added, broadcasting: 3
I0208 13:06:54.354911       8 log.go:172] (0xc000de3a20) Reply frame received for 3
I0208 13:06:54.354935       8 log.go:172] (0xc000de3a20) (0xc001f9e320) Create stream
I0208 13:06:54.354942       8 log.go:172] (0xc000de3a20) (0xc001f9e320) Stream added, broadcasting: 5
I0208 13:06:54.356098       8 log.go:172] (0xc000de3a20) Reply frame received for 5
I0208 13:06:54.467898       8 log.go:172] (0xc000de3a20) Data frame received for 3
I0208 13:06:54.467966       8 log.go:172] (0xc000650500) (3) Data frame handling
I0208 13:06:54.467987       8 log.go:172] (0xc000650500) (3) Data frame sent
I0208 13:06:54.626055       8 log.go:172] (0xc000de3a20) Data frame received for 1
I0208 13:06:54.626287       8 log.go:172] (0xc000de3a20) (0xc000650500) Stream removed, broadcasting: 3
I0208 13:06:54.626343       8 log.go:172] (0xc0000fc820) (1) Data frame handling
I0208 13:06:54.626391       8 log.go:172] (0xc0000fc820) (1) Data frame sent
I0208 13:06:54.626424       8 log.go:172] (0xc000de3a20) (0xc001f9e320) Stream removed, broadcasting: 5
I0208 13:06:54.626493       8 log.go:172] (0xc000de3a20) (0xc0000fc820) Stream removed, broadcasting: 1
I0208 13:06:54.626540       8 log.go:172] (0xc000de3a20) Go away received
I0208 13:06:54.626807       8 log.go:172] (0xc000de3a20) (0xc0000fc820) Stream removed, broadcasting: 1
I0208 13:06:54.626848       8 log.go:172] (0xc000de3a20) (0xc000650500) Stream removed, broadcasting: 3
I0208 13:06:54.626882       8 log.go:172] (0xc000de3a20) (0xc001f9e320) Stream removed, broadcasting: 5
Feb  8 13:06:54.626: INFO: Exec stderr: ""
Feb  8 13:06:54.627: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:54.627: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:54.717074       8 log.go:172] (0xc000de3ef0) (0xc00034a960) Create stream
I0208 13:06:54.717331       8 log.go:172] (0xc000de3ef0) (0xc00034a960) Stream added, broadcasting: 1
I0208 13:06:54.729658       8 log.go:172] (0xc000de3ef0) Reply frame received for 1
I0208 13:06:54.729874       8 log.go:172] (0xc000de3ef0) (0xc000650640) Create stream
I0208 13:06:54.729927       8 log.go:172] (0xc000de3ef0) (0xc000650640) Stream added, broadcasting: 3
I0208 13:06:54.731751       8 log.go:172] (0xc000de3ef0) Reply frame received for 3
I0208 13:06:54.731793       8 log.go:172] (0xc000de3ef0) (0xc001f9e3c0) Create stream
I0208 13:06:54.731807       8 log.go:172] (0xc000de3ef0) (0xc001f9e3c0) Stream added, broadcasting: 5
I0208 13:06:54.732934       8 log.go:172] (0xc000de3ef0) Reply frame received for 5
I0208 13:06:54.840693       8 log.go:172] (0xc000de3ef0) Data frame received for 3
I0208 13:06:54.840875       8 log.go:172] (0xc000650640) (3) Data frame handling
I0208 13:06:54.840930       8 log.go:172] (0xc000650640) (3) Data frame sent
I0208 13:06:54.978955       8 log.go:172] (0xc000de3ef0) (0xc000650640) Stream removed, broadcasting: 3
I0208 13:06:54.979303       8 log.go:172] (0xc000de3ef0) Data frame received for 1
I0208 13:06:54.979356       8 log.go:172] (0xc00034a960) (1) Data frame handling
I0208 13:06:54.979407       8 log.go:172] (0xc000de3ef0) (0xc001f9e3c0) Stream removed, broadcasting: 5
I0208 13:06:54.979557       8 log.go:172] (0xc00034a960) (1) Data frame sent
I0208 13:06:54.979606       8 log.go:172] (0xc000de3ef0) (0xc00034a960) Stream removed, broadcasting: 1
I0208 13:06:54.979660       8 log.go:172] (0xc000de3ef0) Go away received
I0208 13:06:54.979978       8 log.go:172] (0xc000de3ef0) (0xc00034a960) Stream removed, broadcasting: 1
I0208 13:06:54.980013       8 log.go:172] (0xc000de3ef0) (0xc000650640) Stream removed, broadcasting: 3
I0208 13:06:54.980023       8 log.go:172] (0xc000de3ef0) (0xc001f9e3c0) Stream removed, broadcasting: 5
Feb  8 13:06:54.980: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  8 13:06:54.980: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:54.980: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:55.075892       8 log.go:172] (0xc000aa4580) (0xc0009f7540) Create stream
I0208 13:06:55.076030       8 log.go:172] (0xc000aa4580) (0xc0009f7540) Stream added, broadcasting: 1
I0208 13:06:55.087455       8 log.go:172] (0xc000aa4580) Reply frame received for 1
I0208 13:06:55.087521       8 log.go:172] (0xc000aa4580) (0xc001f9e460) Create stream
I0208 13:06:55.087539       8 log.go:172] (0xc000aa4580) (0xc001f9e460) Stream added, broadcasting: 3
I0208 13:06:55.092044       8 log.go:172] (0xc000aa4580) Reply frame received for 3
I0208 13:06:55.092087       8 log.go:172] (0xc000aa4580) (0xc000760fa0) Create stream
I0208 13:06:55.092122       8 log.go:172] (0xc000aa4580) (0xc000760fa0) Stream added, broadcasting: 5
I0208 13:06:55.094070       8 log.go:172] (0xc000aa4580) Reply frame received for 5
I0208 13:06:55.215153       8 log.go:172] (0xc000aa4580) Data frame received for 3
I0208 13:06:55.215386       8 log.go:172] (0xc001f9e460) (3) Data frame handling
I0208 13:06:55.215422       8 log.go:172] (0xc001f9e460) (3) Data frame sent
I0208 13:06:55.331145       8 log.go:172] (0xc000aa4580) Data frame received for 1
I0208 13:06:55.331250       8 log.go:172] (0xc0009f7540) (1) Data frame handling
I0208 13:06:55.331294       8 log.go:172] (0xc0009f7540) (1) Data frame sent
I0208 13:06:55.331312       8 log.go:172] (0xc000aa4580) (0xc0009f7540) Stream removed, broadcasting: 1
I0208 13:06:55.331629       8 log.go:172] (0xc000aa4580) (0xc001f9e460) Stream removed, broadcasting: 3
I0208 13:06:55.331725       8 log.go:172] (0xc000aa4580) (0xc000760fa0) Stream removed, broadcasting: 5
I0208 13:06:55.331826       8 log.go:172] (0xc000aa4580) (0xc0009f7540) Stream removed, broadcasting: 1
I0208 13:06:55.331844       8 log.go:172] (0xc000aa4580) (0xc001f9e460) Stream removed, broadcasting: 3
I0208 13:06:55.331859       8 log.go:172] (0xc000aa4580) (0xc000760fa0) Stream removed, broadcasting: 5
Feb  8 13:06:55.332: INFO: Exec stderr: ""
Feb  8 13:06:55.332: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:55.332: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:55.419651       8 log.go:172] (0xc00141e2c0) (0xc0007615e0) Create stream
I0208 13:06:55.419728       8 log.go:172] (0xc00141e2c0) (0xc0007615e0) Stream added, broadcasting: 1
I0208 13:06:55.426672       8 log.go:172] (0xc00141e2c0) Reply frame received for 1
I0208 13:06:55.426731       8 log.go:172] (0xc00141e2c0) (0xc0027f4640) Create stream
I0208 13:06:55.426756       8 log.go:172] (0xc00141e2c0) (0xc0027f4640) Stream added, broadcasting: 3
I0208 13:06:55.428393       8 log.go:172] (0xc00141e2c0) Reply frame received for 3
I0208 13:06:55.428424       8 log.go:172] (0xc00141e2c0) (0xc001f9e5a0) Create stream
I0208 13:06:55.428439       8 log.go:172] (0xc00141e2c0) (0xc001f9e5a0) Stream added, broadcasting: 5
I0208 13:06:55.429426       8 log.go:172] (0xc00141e2c0) Reply frame received for 5
I0208 13:06:55.545857       8 log.go:172] (0xc00141e2c0) Data frame received for 3
I0208 13:06:55.545988       8 log.go:172] (0xc0027f4640) (3) Data frame handling
I0208 13:06:55.546040       8 log.go:172] (0xc0027f4640) (3) Data frame sent
I0208 13:06:55.699512       8 log.go:172] (0xc00141e2c0) Data frame received for 1
I0208 13:06:55.699689       8 log.go:172] (0xc0007615e0) (1) Data frame handling
I0208 13:06:55.699729       8 log.go:172] (0xc0007615e0) (1) Data frame sent
I0208 13:06:55.700474       8 log.go:172] (0xc00141e2c0) (0xc0007615e0) Stream removed, broadcasting: 1
I0208 13:06:55.700601       8 log.go:172] (0xc00141e2c0) (0xc001f9e5a0) Stream removed, broadcasting: 5
I0208 13:06:55.700707       8 log.go:172] (0xc00141e2c0) (0xc0027f4640) Stream removed, broadcasting: 3
I0208 13:06:55.700800       8 log.go:172] (0xc00141e2c0) Go away received
I0208 13:06:55.700910       8 log.go:172] (0xc00141e2c0) (0xc0007615e0) Stream removed, broadcasting: 1
I0208 13:06:55.700949       8 log.go:172] (0xc00141e2c0) (0xc0027f4640) Stream removed, broadcasting: 3
I0208 13:06:55.700998       8 log.go:172] (0xc00141e2c0) (0xc001f9e5a0) Stream removed, broadcasting: 5
Feb  8 13:06:55.701: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  8 13:06:55.701: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:55.701: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:55.780066       8 log.go:172] (0xc00141e790) (0xc000761ea0) Create stream
I0208 13:06:55.780199       8 log.go:172] (0xc00141e790) (0xc000761ea0) Stream added, broadcasting: 1
I0208 13:06:55.788097       8 log.go:172] (0xc00141e790) Reply frame received for 1
I0208 13:06:55.788223       8 log.go:172] (0xc00141e790) (0xc0027f46e0) Create stream
I0208 13:06:55.788242       8 log.go:172] (0xc00141e790) (0xc0027f46e0) Stream added, broadcasting: 3
I0208 13:06:55.789907       8 log.go:172] (0xc00141e790) Reply frame received for 3
I0208 13:06:55.789949       8 log.go:172] (0xc00141e790) (0xc0009f7680) Create stream
I0208 13:06:55.789966       8 log.go:172] (0xc00141e790) (0xc0009f7680) Stream added, broadcasting: 5
I0208 13:06:55.792369       8 log.go:172] (0xc00141e790) Reply frame received for 5
I0208 13:06:55.937935       8 log.go:172] (0xc00141e790) Data frame received for 3
I0208 13:06:55.938070       8 log.go:172] (0xc0027f46e0) (3) Data frame handling
I0208 13:06:55.938112       8 log.go:172] (0xc0027f46e0) (3) Data frame sent
I0208 13:06:56.081157       8 log.go:172] (0xc00141e790) Data frame received for 1
I0208 13:06:56.081499       8 log.go:172] (0xc00141e790) (0xc0027f46e0) Stream removed, broadcasting: 3
I0208 13:06:56.081557       8 log.go:172] (0xc000761ea0) (1) Data frame handling
I0208 13:06:56.081584       8 log.go:172] (0xc000761ea0) (1) Data frame sent
I0208 13:06:56.081639       8 log.go:172] (0xc00141e790) (0xc0009f7680) Stream removed, broadcasting: 5
I0208 13:06:56.081708       8 log.go:172] (0xc00141e790) (0xc000761ea0) Stream removed, broadcasting: 1
I0208 13:06:56.081729       8 log.go:172] (0xc00141e790) Go away received
I0208 13:06:56.082724       8 log.go:172] (0xc00141e790) (0xc000761ea0) Stream removed, broadcasting: 1
I0208 13:06:56.082831       8 log.go:172] (0xc00141e790) (0xc0027f46e0) Stream removed, broadcasting: 3
I0208 13:06:56.082849       8 log.go:172] (0xc00141e790) (0xc0009f7680) Stream removed, broadcasting: 5
Feb  8 13:06:56.082: INFO: Exec stderr: ""
Feb  8 13:06:56.083: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:56.083: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:56.156984       8 log.go:172] (0xc00141ec60) (0xc0003c75e0) Create stream
I0208 13:06:56.157122       8 log.go:172] (0xc00141ec60) (0xc0003c75e0) Stream added, broadcasting: 1
I0208 13:06:56.176780       8 log.go:172] (0xc00141ec60) Reply frame received for 1
I0208 13:06:56.176917       8 log.go:172] (0xc00141ec60) (0xc0027f4780) Create stream
I0208 13:06:56.176931       8 log.go:172] (0xc00141ec60) (0xc0027f4780) Stream added, broadcasting: 3
I0208 13:06:56.179390       8 log.go:172] (0xc00141ec60) Reply frame received for 3
I0208 13:06:56.179423       8 log.go:172] (0xc00141ec60) (0xc0009f7860) Create stream
I0208 13:06:56.179434       8 log.go:172] (0xc00141ec60) (0xc0009f7860) Stream added, broadcasting: 5
I0208 13:06:56.181399       8 log.go:172] (0xc00141ec60) Reply frame received for 5
I0208 13:06:56.323706       8 log.go:172] (0xc00141ec60) Data frame received for 3
I0208 13:06:56.323803       8 log.go:172] (0xc0027f4780) (3) Data frame handling
I0208 13:06:56.323830       8 log.go:172] (0xc0027f4780) (3) Data frame sent
I0208 13:06:56.425851       8 log.go:172] (0xc00141ec60) Data frame received for 1
I0208 13:06:56.426076       8 log.go:172] (0xc00141ec60) (0xc0027f4780) Stream removed, broadcasting: 3
I0208 13:06:56.426128       8 log.go:172] (0xc0003c75e0) (1) Data frame handling
I0208 13:06:56.426145       8 log.go:172] (0xc0003c75e0) (1) Data frame sent
I0208 13:06:56.426171       8 log.go:172] (0xc00141ec60) (0xc0009f7860) Stream removed, broadcasting: 5
I0208 13:06:56.426208       8 log.go:172] (0xc00141ec60) (0xc0003c75e0) Stream removed, broadcasting: 1
I0208 13:06:56.426223       8 log.go:172] (0xc00141ec60) Go away received
I0208 13:06:56.426694       8 log.go:172] (0xc00141ec60) (0xc0003c75e0) Stream removed, broadcasting: 1
I0208 13:06:56.426718       8 log.go:172] (0xc00141ec60) (0xc0027f4780) Stream removed, broadcasting: 3
I0208 13:06:56.426733       8 log.go:172] (0xc00141ec60) (0xc0009f7860) Stream removed, broadcasting: 5
Feb  8 13:06:56.426: INFO: Exec stderr: ""
Feb  8 13:06:56.426: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:56.426: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:56.589303       8 log.go:172] (0xc00141f130) (0xc0003c7c20) Create stream
I0208 13:06:56.589455       8 log.go:172] (0xc00141f130) (0xc0003c7c20) Stream added, broadcasting: 1
I0208 13:06:56.623392       8 log.go:172] (0xc00141f130) Reply frame received for 1
I0208 13:06:56.623776       8 log.go:172] (0xc00141f130) (0xc0009f7900) Create stream
I0208 13:06:56.623824       8 log.go:172] (0xc00141f130) (0xc0009f7900) Stream added, broadcasting: 3
I0208 13:06:56.628552       8 log.go:172] (0xc00141f130) Reply frame received for 3
I0208 13:06:56.628872       8 log.go:172] (0xc00141f130) (0xc0027f4820) Create stream
I0208 13:06:56.628937       8 log.go:172] (0xc00141f130) (0xc0027f4820) Stream added, broadcasting: 5
I0208 13:06:56.633350       8 log.go:172] (0xc00141f130) Reply frame received for 5
I0208 13:06:56.830450       8 log.go:172] (0xc00141f130) Data frame received for 3
I0208 13:06:56.830488       8 log.go:172] (0xc0009f7900) (3) Data frame handling
I0208 13:06:56.830521       8 log.go:172] (0xc0009f7900) (3) Data frame sent
I0208 13:06:57.008017       8 log.go:172] (0xc00141f130) Data frame received for 1
I0208 13:06:57.008347       8 log.go:172] (0xc00141f130) (0xc0027f4820) Stream removed, broadcasting: 5
I0208 13:06:57.008489       8 log.go:172] (0xc0003c7c20) (1) Data frame handling
I0208 13:06:57.008529       8 log.go:172] (0xc0003c7c20) (1) Data frame sent
I0208 13:06:57.008718       8 log.go:172] (0xc00141f130) (0xc0009f7900) Stream removed, broadcasting: 3
I0208 13:06:57.008774       8 log.go:172] (0xc00141f130) (0xc0003c7c20) Stream removed, broadcasting: 1
I0208 13:06:57.008794       8 log.go:172] (0xc00141f130) Go away received
I0208 13:06:57.009243       8 log.go:172] (0xc00141f130) (0xc0003c7c20) Stream removed, broadcasting: 1
I0208 13:06:57.009267       8 log.go:172] (0xc00141f130) (0xc0009f7900) Stream removed, broadcasting: 3
I0208 13:06:57.009274       8 log.go:172] (0xc00141f130) (0xc0027f4820) Stream removed, broadcasting: 5
Feb  8 13:06:57.009: INFO: Exec stderr: ""
Feb  8 13:06:57.009: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tcgn8 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:06:57.009: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:06:57.149519       8 log.go:172] (0xc00232a2c0) (0xc0027f4aa0) Create stream
I0208 13:06:57.149670       8 log.go:172] (0xc00232a2c0) (0xc0027f4aa0) Stream added, broadcasting: 1
I0208 13:06:57.157509       8 log.go:172] (0xc00232a2c0) Reply frame received for 1
I0208 13:06:57.157591       8 log.go:172] (0xc00232a2c0) (0xc001f9e640) Create stream
I0208 13:06:57.157615       8 log.go:172] (0xc00232a2c0) (0xc001f9e640) Stream added, broadcasting: 3
I0208 13:06:57.159442       8 log.go:172] (0xc00232a2c0) Reply frame received for 3
I0208 13:06:57.159548       8 log.go:172] (0xc00232a2c0) (0xc0027f4b40) Create stream
I0208 13:06:57.159576       8 log.go:172] (0xc00232a2c0) (0xc0027f4b40) Stream added, broadcasting: 5
I0208 13:06:57.161182       8 log.go:172] (0xc00232a2c0) Reply frame received for 5
I0208 13:06:57.354329       8 log.go:172] (0xc00232a2c0) Data frame received for 3
I0208 13:06:57.354447       8 log.go:172] (0xc001f9e640) (3) Data frame handling
I0208 13:06:57.354474       8 log.go:172] (0xc001f9e640) (3) Data frame sent
I0208 13:06:57.468390       8 log.go:172] (0xc00232a2c0) Data frame received for 1
I0208 13:06:57.468463       8 log.go:172] (0xc00232a2c0) (0xc001f9e640) Stream removed, broadcasting: 3
I0208 13:06:57.468567       8 log.go:172] (0xc0027f4aa0) (1) Data frame handling
I0208 13:06:57.468588       8 log.go:172] (0xc0027f4aa0) (1) Data frame sent
I0208 13:06:57.468612       8 log.go:172] (0xc00232a2c0) (0xc0027f4aa0) Stream removed, broadcasting: 1
I0208 13:06:57.468667       8 log.go:172] (0xc00232a2c0) (0xc0027f4b40) Stream removed, broadcasting: 5
I0208 13:06:57.468814       8 log.go:172] (0xc00232a2c0) (0xc0027f4aa0) Stream removed, broadcasting: 1
I0208 13:06:57.468830       8 log.go:172] (0xc00232a2c0) (0xc001f9e640) Stream removed, broadcasting: 3
I0208 13:06:57.468847       8 log.go:172] (0xc00232a2c0) (0xc0027f4b40) Stream removed, broadcasting: 5
Feb  8 13:06:57.469: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:06:57.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-tcgn8" for this suite.
Feb  8 13:07:55.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:07:55.704: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-tcgn8, resource: bindings, ignored listing per whitelist
Feb  8 13:07:55.719: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-tcgn8 deletion completed in 58.237429558s

• [SLOW TEST:95.611 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
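Each `ExecWithOptions {Command:[cat /etc/hosts] ...}` entry above runs `cat` inside a container over the exec subresource (the interleaved `log.go` lines are the SPDY stream multiplexer setting up its error, stdout, and stderr channels). What the test then checks on the captured stdout can be sketched roughly like this (an assumption: a simplified stand-in for the e2e assertion, built around the marker comment kubelet writes at the top of a managed hosts file):

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletHostsMarker is the header comment kubelet prepends to an
// /etc/hosts file it manages on behalf of a pod.
const kubeletHostsMarker = "# Kubernetes-managed hosts file"

// isKubeletManaged is a hypothetical simplification of the e2e check:
// a container's /etc/hosts counts as kubelet-managed iff it carries the
// marker header. Pods with hostNetwork=true, or containers that mount
// their own file over /etc/hosts, must NOT carry it.
func isKubeletManaged(etcHosts string) bool {
	return strings.Contains(etcHosts, kubeletHostsMarker)
}

func main() {
	managed := "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
	original := "127.0.0.1\tlocalhost\n"
	fmt.Println(isKubeletManaged(managed), isKubeletManaged(original))
}
```

This is why the test execs `cat` on both `/etc/hosts` and `/etc/hosts-original` in every container: the first should differ from the second exactly when kubelet manages it.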
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:07:55.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:08:02.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-ht2d2" for this suite.
Feb  8 13:08:10.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:08:10.927: INFO: namespace: e2e-tests-namespaces-ht2d2, resource: bindings, ignored listing per whitelist
Feb  8 13:08:11.029: INFO: namespace e2e-tests-namespaces-ht2d2 deletion completed in 8.184479215s
STEP: Destroying namespace "e2e-tests-nsdeletetest-6knjg" for this suite.
Feb  8 13:08:11.034: INFO: Namespace e2e-tests-nsdeletetest-6knjg was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-jbfb7" for this suite.
Feb  8 13:08:17.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:08:17.113: INFO: namespace: e2e-tests-nsdeletetest-jbfb7, resource: bindings, ignored listing per whitelist
Feb  8 13:08:17.246: INFO: namespace e2e-tests-nsdeletetest-jbfb7 deletion completed in 6.212026124s

• [SLOW TEST:21.526 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:08:17.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  8 13:08:17.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-projected-89tdj" to be "success or failure"
Feb  8 13:08:18.034: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 187.516411ms
Feb  8 13:08:20.155: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307678252s
Feb  8 13:08:22.180: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33293295s
Feb  8 13:08:24.736: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.888558547s
Feb  8 13:08:26.764: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.917087411s
Feb  8 13:08:29.161: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.314456394s
Feb  8 13:08:31.188: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.341440428s
STEP: Saw pod success
Feb  8 13:08:31.189: INFO: Pod "downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:08:31.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005 container client-container: 
STEP: delete the pod
Feb  8 13:08:31.802: INFO: Waiting for pod downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:08:31.865: INFO: Pod downwardapi-volume-12abb9ee-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:08:31.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-89tdj" for this suite.
Feb  8 13:08:40.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:08:40.315: INFO: namespace: e2e-tests-projected-89tdj, resource: bindings, ignored listing per whitelist
Feb  8 13:08:40.418: INFO: namespace e2e-tests-projected-89tdj deletion completed in 8.53901253s

• [SLOW TEST:23.171 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:08:40.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-2jsd
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 13:08:41.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2jsd" in namespace "e2e-tests-subpath-89vqw" to be "success or failure"
Feb  8 13:08:41.209: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 201.492574ms
Feb  8 13:08:43.739: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731690493s
Feb  8 13:08:45.754: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.745945726s
Feb  8 13:08:48.382: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.374027647s
Feb  8 13:08:50.406: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.398673189s
Feb  8 13:08:52.436: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.428624126s
Feb  8 13:08:54.917: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.909352171s
Feb  8 13:08:56.932: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.924547256s
Feb  8 13:08:58.945: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.937738165s
Feb  8 13:09:00.958: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 19.950632482s
Feb  8 13:09:02.982: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 21.974147361s
Feb  8 13:09:04.996: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 23.988394583s
Feb  8 13:09:07.018: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 26.009999854s
Feb  8 13:09:09.031: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 28.023449966s
Feb  8 13:09:11.048: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 30.040089174s
Feb  8 13:09:13.061: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 32.053894734s
Feb  8 13:09:15.073: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 34.065806953s
Feb  8 13:09:17.575: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Running", Reason="", readiness=false. Elapsed: 36.567266526s
Feb  8 13:09:19.589: INFO: Pod "pod-subpath-test-projected-2jsd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.581657983s
STEP: Saw pod success
Feb  8 13:09:19.589: INFO: Pod "pod-subpath-test-projected-2jsd" satisfied condition "success or failure"
Feb  8 13:09:19.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-2jsd container test-container-subpath-projected-2jsd: 
STEP: delete the pod
Feb  8 13:09:21.440: INFO: Waiting for pod pod-subpath-test-projected-2jsd to disappear
Feb  8 13:09:21.473: INFO: Pod pod-subpath-test-projected-2jsd no longer exists
STEP: Deleting pod pod-subpath-test-projected-2jsd
Feb  8 13:09:21.474: INFO: Deleting pod "pod-subpath-test-projected-2jsd" in namespace "e2e-tests-subpath-89vqw"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:09:21.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-89vqw" for this suite.
Feb  8 13:09:27.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:09:27.766: INFO: namespace: e2e-tests-subpath-89vqw, resource: bindings, ignored listing per whitelist
Feb  8 13:09:27.968: INFO: namespace e2e-tests-subpath-89vqw deletion completed in 6.47167119s

• [SLOW TEST:47.549 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:09:27.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-3cd31f09-4a74-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 13:09:28.579: INFO: Waiting up to 5m0s for pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-76t2q" to be "success or failure"
Feb  8 13:09:28.593: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.394088ms
Feb  8 13:09:30.956: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377748996s
Feb  8 13:09:32.974: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.395188135s
Feb  8 13:09:36.135: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.556501475s
Feb  8 13:09:38.155: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.57637114s
Feb  8 13:09:40.180: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.601020614s
STEP: Saw pod success
Feb  8 13:09:40.180: INFO: Pod "pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:09:40.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  8 13:09:40.370: INFO: Waiting for pod pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:09:40.413: INFO: Pod pod-secrets-3cd6ebe4-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:09:40.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-76t2q" for this suite.
Feb  8 13:09:46.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:09:46.929: INFO: namespace: e2e-tests-secrets-76t2q, resource: bindings, ignored listing per whitelist
Feb  8 13:09:46.969: INFO: namespace e2e-tests-secrets-76t2q deletion completed in 6.466307448s

• [SLOW TEST:19.000 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:09:46.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb  8 13:09:47.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  8 13:09:47.726: INFO: stderr: ""
Feb  8 13:09:47.727: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:09:47.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p85cc" for this suite.
Feb  8 13:09:53.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:09:54.003: INFO: namespace: e2e-tests-kubectl-p85cc, resource: bindings, ignored listing per whitelist
Feb  8 13:09:54.009: INFO: namespace e2e-tests-kubectl-p85cc deletion completed in 6.265625079s

• [SLOW TEST:7.039 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:09:54.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb  8 13:09:54.321: INFO: Waiting up to 5m0s for pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-containers-crzpr" to be "success or failure"
Feb  8 13:09:54.341: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.375194ms
Feb  8 13:09:56.354: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032891104s
Feb  8 13:09:58.367: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045597145s
Feb  8 13:10:00.555: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233744426s
Feb  8 13:10:02.777: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455858331s
Feb  8 13:10:04.788: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.467000424s
Feb  8 13:10:06.807: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.485241525s
STEP: Saw pod success
Feb  8 13:10:06.807: INFO: Pod "client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:10:06.817: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 13:10:08.199: INFO: Waiting for pod client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:10:08.371: INFO: Pod client-containers-4c1b91f5-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:10:08.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-crzpr" for this suite.
Feb  8 13:10:16.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:10:16.553: INFO: namespace: e2e-tests-containers-crzpr, resource: bindings, ignored listing per whitelist
Feb  8 13:10:16.772: INFO: namespace e2e-tests-containers-crzpr deletion completed in 8.390315546s

• [SLOW TEST:22.762 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:10:16.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  8 13:10:27.801: INFO: Successfully updated pod "annotationupdate59c18401-4a74-11ea-95d6-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:10:29.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bkh49" for this suite.
Feb  8 13:10:54.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:10:54.119: INFO: namespace: e2e-tests-downward-api-bkh49, resource: bindings, ignored listing per whitelist
Feb  8 13:10:54.196: INFO: namespace e2e-tests-downward-api-bkh49 deletion completed in 24.232067837s

• [SLOW TEST:37.423 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:10:54.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb  8 13:10:54.413: INFO: Waiting up to 5m0s for pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-var-expansion-mzs2d" to be "success or failure"
Feb  8 13:10:54.420: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414139ms
Feb  8 13:10:56.432: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018514854s
Feb  8 13:10:58.450: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036442323s
Feb  8 13:11:01.100: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686769887s
Feb  8 13:11:03.127: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713195537s
Feb  8 13:11:05.202: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.78858154s
Feb  8 13:11:07.219: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.805217735s
STEP: Saw pod success
Feb  8 13:11:07.219: INFO: Pod "var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:11:07.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  8 13:11:07.382: INFO: Waiting for pod var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:11:07.409: INFO: Pod var-expansion-6ff2a585-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:11:07.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-mzs2d" for this suite.
Feb  8 13:11:13.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:11:14.174: INFO: namespace: e2e-tests-var-expansion-mzs2d, resource: bindings, ignored listing per whitelist
Feb  8 13:11:14.219: INFO: namespace e2e-tests-var-expansion-mzs2d deletion completed in 6.641351193s

• [SLOW TEST:20.022 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:11:14.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  8 13:11:14.481: INFO: Waiting up to 5m0s for pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-downward-api-kxhqn" to be "success or failure"
Feb  8 13:11:14.598: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 116.60185ms
Feb  8 13:11:16.623: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141911314s
Feb  8 13:11:18.655: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174281224s
Feb  8 13:11:20.710: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228959002s
Feb  8 13:11:22.723: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242063041s
Feb  8 13:11:24.771: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.289655665s
STEP: Saw pod success
Feb  8 13:11:24.771: INFO: Pod "downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:11:24.860: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  8 13:11:25.246: INFO: Waiting for pod downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:11:25.263: INFO: Pod downward-api-7bf4d253-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:11:25.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kxhqn" for this suite.
Feb  8 13:11:31.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:11:31.405: INFO: namespace: e2e-tests-downward-api-kxhqn, resource: bindings, ignored listing per whitelist
Feb  8 13:11:31.445: INFO: namespace e2e-tests-downward-api-kxhqn deletion completed in 6.168719738s

• [SLOW TEST:17.226 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:11:31.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  8 13:11:31.797: INFO: Waiting up to 5m0s for pod "pod-86481f66-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-emptydir-gmzw6" to be "success or failure"
Feb  8 13:11:31.832: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.39027ms
Feb  8 13:11:34.062: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264833441s
Feb  8 13:11:36.084: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286706814s
Feb  8 13:11:38.414: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617098231s
Feb  8 13:11:40.430: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633215347s
Feb  8 13:11:42.652: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854974328s
Feb  8 13:11:45.303: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.506324252s
STEP: Saw pod success
Feb  8 13:11:45.304: INFO: Pod "pod-86481f66-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:11:46.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-86481f66-4a74-11ea-95d6-0242ac110005 container test-container: 
STEP: delete the pod
Feb  8 13:11:46.424: INFO: Waiting for pod pod-86481f66-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:11:46.544: INFO: Pod pod-86481f66-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:11:46.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gmzw6" for this suite.
Feb  8 13:11:52.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:11:52.930: INFO: namespace: e2e-tests-emptydir-gmzw6, resource: bindings, ignored listing per whitelist
Feb  8 13:11:52.972: INFO: namespace e2e-tests-emptydir-gmzw6 deletion completed in 6.372839495s

• [SLOW TEST:21.526 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:11:52.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  8 13:11:53.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  8 13:11:54.049: INFO: stderr: ""
Feb  8 13:11:54.049: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:11:54.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nfwv5" for this suite.
Feb  8 13:12:00.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:12:00.229: INFO: namespace: e2e-tests-kubectl-nfwv5, resource: bindings, ignored listing per whitelist
Feb  8 13:12:00.272: INFO: namespace e2e-tests-kubectl-nfwv5 deletion completed in 6.196334888s

• [SLOW TEST:7.301 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
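The kubectl version spec above only asserts that both the client and the server stanzas appear in the command's stdout. A minimal stand-alone sketch of that assertion, replaying the stdout captured in the log instead of contacting a cluster (the version strings are abbreviated from the recorded output; this is an illustration of the check, not the framework's actual code):

```shell
#!/bin/sh
# Replay of the stdout recorded above; in the real test this comes from
# `kubectl --kubeconfig=/root/.kube/config version`.
out='Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8"}'

# The conformance condition: both version lines must be present,
# i.e. "all data is printed".
echo "$out" | grep -q '^Client Version:' || { echo "missing client version"; exit 1; }
echo "$out" | grep -q '^Server Version:' || { echo "missing server version"; exit 1; }
echo "version output complete"
```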
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:12:00.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-977cd89d-4a74-11ea-95d6-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  8 13:12:00.972: INFO: Waiting up to 5m0s for pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005" in namespace "e2e-tests-secrets-xfjm6" to be "success or failure"
Feb  8 13:12:01.226: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 253.862041ms
Feb  8 13:12:03.379: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407063112s
Feb  8 13:12:05.388: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415575192s
Feb  8 13:12:09.796: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.823694023s
Feb  8 13:12:11.834: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.861973561s
Feb  8 13:12:14.120: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.148276363s
Feb  8 13:12:16.134: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.162549807s
STEP: Saw pod success
Feb  8 13:12:16.135: INFO: Pod "pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005" satisfied condition "success or failure"
Feb  8 13:12:16.144: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  8 13:12:19.007: INFO: Waiting for pod pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005 to disappear
Feb  8 13:12:19.022: INFO: Pod pod-secrets-97a9bd40-4a74-11ea-95d6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:12:19.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xfjm6" for this suite.
Feb  8 13:12:25.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:12:25.151: INFO: namespace: e2e-tests-secrets-xfjm6, resource: bindings, ignored listing per whitelist
Feb  8 13:12:25.217: INFO: namespace e2e-tests-secrets-xfjm6 deletion completed in 6.177847084s

• [SLOW TEST:24.944 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
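The "success or failure" wait seen in the Secrets test above is a simple poll on the pod phase against a 5m0s deadline. A hedged sketch of that loop — `get_phase` here is a stand-in stub that steps through the phases recorded in the log, not a real `kubectl get pod` query:

```shell
#!/bin/sh
# Stub standing in for a pod-phase query; it reports Pending twice and
# then Succeeded, mimicking the Elapsed lines above.
i=0
get_phase() {
    i=$((i + 1))
    if [ "$i" -lt 3 ]; then phase=Pending; else phase=Succeeded; fi
}

deadline=300                  # the framework's 5m0s timeout, in seconds
elapsed=0
phase=Unknown
while [ "$elapsed" -lt "$deadline" ]; do
    get_phase                 # sets $phase (no subshell, so the stub's counter persists)
    echo "Pod phase=$phase, elapsed=${elapsed}s"
    case "$phase" in
        Succeeded|Failed) break ;;
    esac
    elapsed=$((elapsed + 2))  # the framework re-polls every ~2s; sleep omitted here
done
[ "$phase" = Succeeded ] && echo 'satisfied condition "success or failure"'
```

Note that `Failed` also terminates the wait: the condition is literally "success or failure", and the STEP lines afterwards distinguish the two by checking the phase.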
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  8 13:12:25.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  8 13:12:51.710: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 13:12:51.721: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 13:12:53.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 13:12:53.736: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 13:12:55.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 13:12:55.735: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 13:12:57.721: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 13:12:57.784: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  8 13:12:57.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xqzr6" for this suite.
Feb  8 13:13:23.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:13:24.363: INFO: namespace: e2e-tests-container-lifecycle-hook-xqzr6, resource: bindings, ignored listing per whitelist
Feb  8 13:13:24.369: INFO: namespace e2e-tests-container-lifecycle-hook-xqzr6 deletion completed in 26.53708857s

• [SLOW TEST:59.151 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
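The lifecycle-hook teardown above polls roughly every 2s until the deleted pod disappears. A sketch of that wait-for-deletion pattern, with `pod_exists` stubbed to report the pod gone after three polls rather than querying the API server:

```shell
#!/bin/sh
# Stub standing in for an existence check on the pod; the real test asks
# the API server whether pod-with-prestop-http-hook is still present.
n=0
pod_exists() {
    n=$((n + 1))
    [ "$n" -le 3 ]            # pretend the pod lingers for three polls, then goes away
}

while :; do
    echo "Waiting for pod pod-with-prestop-http-hook to disappear"
    if pod_exists; then
        echo "Pod pod-with-prestop-http-hook still exists"
    else
        echo "Pod pod-with-prestop-http-hook no longer exists"
        break
    fi
    # the real framework sleeps ~2s between polls; omitted to keep this runnable
done
```

The pod lingers because deletion waits for the PreStop HTTP hook and the grace period to complete, which is exactly what the subsequent "check prestop hook" step verifies.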
S
Feb  8 13:13:24.369: INFO: Running AfterSuite actions on all nodes
Feb  8 13:13:24.369: INFO: Running AfterSuite actions on node 1
Feb  8 13:13:24.370: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-storage] Projected downwardAPI [It] should provide container's memory limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395

Ran 199 of 2164 Specs in 8770.371 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8770.73s)
FAIL