I0104 13:44:24.236543 9 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0104 13:44:24.236891 9 e2e.go:109] Starting e2e run "b8c344be-d34b-4d1b-befa-001e925c83f6" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578145463 - Will randomize all specs
Will run 278 of 4814 specs

Jan 4 13:44:24.293: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 13:44:24.295: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 4 13:44:24.320: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 4 13:44:24.350: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 4 13:44:24.350: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 4 13:44:24.350: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 4 13:44:24.367: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 4 13:44:24.367: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 4 13:44:24.367: INFO: e2e test version: v1.17.0
Jan 4 13:44:24.369: INFO: kube-apiserver version: v1.17.0
Jan 4 13:44:24.369: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 13:44:24.375: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 13:44:24.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jan 4 13:44:24.502: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-47677e2a-100f-4644-bd62-7ad2a270e4bf
STEP: Creating a pod to test consume configMaps
Jan 4 13:44:24.529: INFO: Waiting up to 5m0s for pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96" in namespace "configmap-1370" to be "success or failure"
Jan 4 13:44:24.617: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Pending", Reason="", readiness=false. Elapsed: 88.090715ms
Jan 4 13:44:26.627: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098522448s
Jan 4 13:44:28.636: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106778702s
Jan 4 13:44:30.662: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133398681s
Jan 4 13:44:32.667: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138119772s
Jan 4 13:44:34.674: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.144895648s
Jan 4 13:44:36.680: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.151439196s
STEP: Saw pod success
Jan 4 13:44:36.681: INFO: Pod "pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96" satisfied condition "success or failure"
Jan 4 13:44:36.684: INFO: Trying to get logs from node jerma-node pod pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96 container configmap-volume-test:
STEP: delete the pod
Jan 4 13:44:36.769: INFO: Waiting for pod pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96 to disappear
Jan 4 13:44:36.786: INFO: Pod pod-configmaps-509e0df6-0400-48c3-8fdb-e39d57b3ef96 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 13:44:36.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1370" for this suite.
• [SLOW TEST:12.418 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":18,"failed":0}
SSSSSSSSS
------------------------------
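
Editor's note: for readers reproducing this spec by hand, the pod the ConfigMap test creates boils down to a ConfigMap volume with DefaultMode set. A minimal Go sketch follows; the object names and the pause image are illustrative stand-ins for the generated names above (the real test runs a mount-test image that prints the mounted file's mode):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// 0400 (owner read-only) is the kind of mode the test asserts on the
	// projected file. DefaultMode applies to every key in the volume.
	mode := int32(0400)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "k8s.gcr.io/pause:3.1", // stand-in image
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].ConfigMap)
}
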
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 13:44:36.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 4 13:44:37.468: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 4 13:44:37.497: INFO: Waiting for terminating namespaces to be deleted...
Jan 4 13:44:37.499: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Jan 4 13:44:37.513: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.514: INFO: Container kube-proxy ready: true, restart count 0
Jan 4 13:44:37.514: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 4 13:44:37.514: INFO: Container weave ready: true, restart count 1
Jan 4 13:44:37.514: INFO: Container weave-npc ready: true, restart count 0
Jan 4 13:44:37.514: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Jan 4 13:44:37.532: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container kube-apiserver ready: true, restart count 1
Jan 4 13:44:37.532: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container kube-controller-manager ready: true, restart count 1
Jan 4 13:44:37.532: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container etcd ready: true, restart count 1
Jan 4 13:44:37.532: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container kube-proxy ready: true, restart count 0
Jan 4 13:44:37.532: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container weave ready: true, restart count 0
Jan 4 13:44:37.532: INFO: Container weave-npc ready: true, restart count 0
Jan 4 13:44:37.532: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container kube-scheduler ready: true, restart count 1
Jan 4 13:44:37.532: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container coredns ready: true, restart count 0
Jan 4 13:44:37.532: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 4 13:44:37.532: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 4 13:44:37.757: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 4 13:44:37.757: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 4 13:44:37.757: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Jan 4 13:44:37.757: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 4 13:44:37.814: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018.15e6b2f363676c70], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8746/filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018 to jerma-server-mvvl6gufaqub]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018.15e6b2f44a28cff3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018.15e6b2f4e707cfa6], Reason = [Created], Message = [Created container filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018.15e6b2f50950f422], Reason = [Started], Message = [Started container filler-pod-ad789a8a-e474-4dc6-a44b-9950ac255018]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e.15e6b2f3607e654d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8746/filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e to jerma-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e.15e6b2f472cdb63a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e.15e6b2f4f900eddd], Reason = [Created], Message = [Created container filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e.15e6b2f524876a0f], Reason = [Started], Message = [Started container filler-pod-f9d8b8f2-603b-43c0-83c3-a4329f2b002e]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e6b2f5b8f0ab21], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 13:44:49.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8746" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:12.553 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":2,"skipped":27,"failed":0}
SSSS
------------------------------
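
Editor's note: the scheduling spec above works purely through CPU accounting; it sums the existing requests per node, fills the remaining allocatable CPU with pause pods (2786m and 2261m in this run), then expects one extra pod to fail with "Insufficient cpu". A sketch of such a filler pod follows. The real test pins pods via a node label and selector; plain NodeName is used here for brevity, and the 2786m figure is taken from the jerma-node line above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Request exactly the node's remaining allocatable CPU so that any
	// further CPU-requesting pod cannot fit on this node.
	fillerCPU := resource.MustParse("2786m")

	filler := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example"},
		Spec: corev1.PodSpec{
			NodeName: "jerma-node", // simplification; the test uses a label selector
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: fillerCPU},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: fillerCPU},
				},
			}},
		},
	}
	fmt.Println(filler.Spec.Containers[0].Resources.Requests.Cpu())
}
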
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 13:44:49.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-5c91b1fd-28a5-4e45-a175-b2ab0af8108e
STEP: Creating a pod to test consume secrets
Jan 4 13:44:49.641: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633" in namespace "projected-2863" to be "success or failure"
Jan 4 13:44:49.757: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 115.286381ms
Jan 4 13:44:51.761: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11945695s
Jan 4 13:44:53.772: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13082792s
Jan 4 13:44:56.437: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795081231s
Jan 4 13:44:58.547: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 8.905479345s
Jan 4 13:45:00.554: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 10.912850109s
Jan 4 13:45:02.562: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Pending", Reason="", readiness=false. Elapsed: 12.920419827s
Jan 4 13:45:04.568: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.926653667s
STEP: Saw pod success
Jan 4 13:45:04.568: INFO: Pod "pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633" satisfied condition "success or failure"
Jan 4 13:45:04.572: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633 container projected-secret-volume-test:
STEP: delete the pod
Jan 4 13:45:04.718: INFO: Waiting for pod pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633 to disappear
Jan 4 13:45:04.733: INFO: Pod pod-projected-secrets-c57db37a-ea70-48aa-9991-e0968bce1633 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 13:45:04.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2863" for this suite.
• [SLOW TEST:15.400 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":31,"failed":0}
SSSSSSSSS
------------------------------
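
Editor's note: the projected-secret variant exercises the same defaultMode check through a projected volume rather than a plain secret volume. A sketch, with illustrative names and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// DefaultMode on the projected volume applies to every projected file,
	// here a single secret source.
	mode := int32(0400)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "k8s.gcr.io/pause:3.1", // stand-in image
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].Projected.Sources[0].Secret)
}
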
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 13:45:04.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 4 13:45:04.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc" in namespace "projected-9309" to be "success or failure"
Jan 4 13:45:04.992: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.205122ms
Jan 4 13:45:06.997: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03260386s
Jan 4 13:45:09.003: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038546108s
Jan 4 13:45:11.011: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045739011s
Jan 4 13:45:13.015: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050439736s
Jan 4 13:45:15.019: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054541238s
STEP: Saw pod success
Jan 4 13:45:15.020: INFO: Pod "downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc" satisfied condition "success or failure"
Jan 4 13:45:15.022: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc container client-container:
STEP: delete the pod
Jan 4 13:45:15.083: INFO: Waiting for pod downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc to disappear
Jan 4 13:45:15.105: INFO: Pod downwardapi-volume-72ed6e47-432e-401f-b27a-30867153e2bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 13:45:15.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9309" for this suite.
• [SLOW TEST:10.366 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":40,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
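
Editor's note: unlike the two defaultMode specs above, "should set mode on item file" sets Mode per projected item rather than on the whole volume. A sketch of the relevant volume; the path and field used here are assumptions, not lifted from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)

	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							Mode:     &mode, // per-item mode overrides the volume default
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%#o\n", *vol.Projected.Sources[0].DownwardAPI.Items[0].Mode)
}
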
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 13:45:15.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 4 13:45:25.915: INFO: Waiting up to 5m0s for pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8" in namespace "pods-9126" to be "success or failure"
Jan 4 13:45:25.934: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.923779ms
Jan 4 13:45:27.939: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02322907s
Jan 4 13:45:29.945: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029467499s
Jan 4 13:45:31.952: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036439442s
Jan 4 13:45:33.956: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040870269s
Jan 4 13:45:35.962: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047174372s
Jan 4 13:45:37.966: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.050331291s
Jan 4 13:45:40.706: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.791090996s
Jan 4 13:45:42.715: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.799483102s
STEP: Saw pod success
Jan 4 13:45:42.715: INFO: Pod "client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8" satisfied condition "success or failure"
Jan 4 13:45:42.721: INFO: Trying to get logs from node jerma-node pod client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8 container env3cont:
STEP: delete the pod
Jan 4 13:45:42.927: INFO: Waiting for pod client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8 to disappear
Jan 4 13:45:42.945: INFO: Pod client-envvars-15131ade-de2c-4cc3-b5fb-611d760af9e8 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 13:45:42.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9126" for this suite.
• [SLOW TEST:27.858 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":78,"failed":0}
SS
------------------------------
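
Editor's note: the contract this spec verifies is the kubelet's docker-link-style environment injection for Services that already exist when a pod starts. A Service named "foo-bar" yields FOO_BAR_SERVICE_HOST and FOO_BAR_SERVICE_PORT (name upper-cased, dashes becoming underscores) plus FOO_BAR_PORT_* link variables. A sketch of how a client container can observe them; FOOSERVICE is a hypothetical service name, not one from this run:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Print every injected variable for a Service named "fooservice",
	// e.g. FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, FOOSERVICE_PORT_...
	for _, kv := range os.Environ() {
		if strings.HasPrefix(kv, "FOOSERVICE_") {
			fmt.Println(kv)
		}
	}
}
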
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:45:48.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:45:50.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:45:52.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:45:54.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:45:56.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742344, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 13:45:59.231: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:45:59.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2570" for this suite. STEP: Destroying namespace "webhook-2570-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.229 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":6,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:46:00.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 4 13:46:00.390: INFO: >>> kubeConfig: /root/.kube/config Jan 4 13:46:03.110: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:46:16.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7931" for this suite. 
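
Editor's note: the webhook spec registers webhooks whose rules match ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, then verifies the API server still allows those objects to be created and deleted; admission webhooks must not be able to block changes to webhook configurations. A hedged sketch of such a registration follows; the service name and namespace are taken loosely from this run, while the webhook name and path are hypothetical:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failurePolicy := admissionregistrationv1.Ignore
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-deny" // hypothetical path on the sample webhook service

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-webhook-configuration-deletions"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "deny-webhook-configuration-deletions.example.com",
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-2570",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			// Match the webhook-configuration resources themselves; the API
			// server is expected never to consult this webhook for them.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.OperationAll},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"admissionregistration.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"validatingwebhookconfigurations", "mutatingwebhookconfigurations"},
				},
			}},
		}},
	}
	fmt.Println(cfg.Name)
}
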
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 13:46:00.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 4 13:46:00.390: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 13:46:03.110: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 13:46:16.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7931" for this suite.
• [SLOW TEST:17.257 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":7,"skipped":115,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
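
Editor's note: this spec creates two CRDs sharing a group and version but with different kinds, then checks that both appear as distinct definitions in the published OpenAPI document (the extra kubeConfig loads above correspond to the client invocations it makes). A sketch of such a pair, with a made-up group standing in for the random one the framework generates:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crd builds a minimal structural-schema CRD in a shared group/version.
func crd(kind, plural string) *apiextensionsv1.CustomResourceDefinition {
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + ".crd-publish-openapi-test.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com", // hypothetical group
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {Type: "object"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	// Same group and version, different kinds.
	fmt.Println(crd("CrdA", "crdas").Name, crd("CrdB", "crdbs").Name)
}
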
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:24.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:26.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:28.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:30.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:32.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:34.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:37.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:38.767: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:40.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:42.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:44.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:46.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:48.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:50.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:46:52.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742378, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... the same deployment status (Available=False/MinimumReplicasUnavailable, Progressing=True/ReplicaSetUpdated) was logged again, unchanged except for the timestamp, at 13:46:54.755, 13:46:56.759, 13:46:58.757, 13:47:00.762, 13:47:02.758, 13:47:04.757 and 13:47:06.754 ...]
Jan 4 13:47:09.643: INFO: Waited 872.654744ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:47:10.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5049" for this suite. • [SLOW TEST:53.023 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":8,"skipped":131,"failed":0} [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:47:10.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 13:47:10.874: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"df7dde18-6efa-4f11-8de8-6dfa2b590eca", Controller:(*bool)(0xc0008e6eea), BlockOwnerDeletion:(*bool)(0xc0008e6eeb)}} Jan 4 13:47:10.896: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6bdceacd-1165-4bd9-9cdc-a7b03834d65b", Controller:(*bool)(0xc0008e7076), BlockOwnerDeletion:(*bool)(0xc0008e7077)}} Jan 4 13:47:10.963: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"09b0cb8c-40fa-4acc-b933-6b1f76b773fd", Controller:(*bool)(0xc002cbab8a), BlockOwnerDeletion:(*bool)(0xc002cbab8b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:47:16.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5377" for this suite.
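For reference, the dependency circle logged just above is pod1 owned by pod3, pod2 owned by pod1, and pod3 owned by pod2, each reference with Controller and BlockOwnerDeletion set, so every pod nominally blocks the deletion of its owner. A minimal sketch of how such a cycle is built with the Kubernetes Go API types (pod names are from the log; the UIDs and the helper function are hypothetical, since in the real test the UIDs come from the created objects):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedPod returns a bare pod whose metadata claims ownerName as its owner.
// BlockOwnerDeletion asks the garbage collector to keep the owner around
// until this dependent is gone -- the property that makes a cycle tricky.
func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
	controller, block := true, true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID,
				Controller:         &controller,
				BlockOwnerDeletion: &block,
			}},
		},
	}
}

func main() {
	pod1 := ownedPod("pod1", "pod3", "uid-of-pod3") // hypothetical UIDs
	pod2 := ownedPod("pod2", "pod1", "uid-of-pod1")
	pod3 := ownedPod("pod3", "pod2", "uid-of-pod2")
	for _, p := range []*corev1.Pod{pod1, pod2, pod3} {
		fmt.Printf("%s is owned by %s\n", p.Name, p.OwnerReferences[0].Name)
	}
}

The conformance point is that the garbage collector must still make progress on such a graph: a blockOwnerDeletion claim inside a cycle cannot hold deletion up forever, which is why the test completes within a few seconds above.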
• [SLOW TEST:5.687 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":9,"skipped":131,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:47:16.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 4 13:47:35.401: INFO: Successfully updated pod "labelsupdateef750fd5-8c11-49f9-8ddb-9bb948594746" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:47:39.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9548" for this suite. 
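The Downward API test above mounts the pod's own labels through a downward API volume and then waits for the kubelet to rewrite the projected file after the labels are updated ("Successfully updated pod ..." in the log). A sketch of the pod shape it exercises, using the core/v1 Go types; the pod name, image, mount path, and command here are hypothetical stand-ins rather than what the e2e framework generates:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsPod mounts the pod's own labels at /etc/podinfo/labels via the
// downward API; when the labels are updated on the API server, the kubelet
// rewrites the projected file, which is what the test above waits for.
func labelsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labels-demo", // hypothetical; the e2e test generates a name
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client",
				Image:   "busybox", // hypothetical image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = labelsPod() }

Because the volume is backed by the API object rather than the container image, patching the labels is enough to change the file's contents without restarting the pod.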
• [SLOW TEST:23.385 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":135,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:47:39.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 13:47:40.629: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 13:47:42.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742460, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742460, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... the same deployment status was logged again, unchanged except for the timestamp, at 13:47:44.663, 13:47:46.651, 13:47:48.652, 13:47:50.650 and 13:47:52.650 ...]
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 13:47:55.688: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:47:56.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5330" for this suite. STEP: Destroying namespace "webhook-5330-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.621 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":11,"skipped":140,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:47:56.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-20e079f6-9baa-4f97-9442-380662131896 STEP: Creating a pod to test consume secrets Jan 4 13:47:56.367: INFO: Waiting up to 5m0s for pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e" in namespace "secrets-5212" to be "success or failure" Jan 4 13:47:56.378: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.401196ms Jan 4 13:47:58.384: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016802516s Jan 4 13:48:00.392: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.024470813s Jan 4 13:48:02.396: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029208127s Jan 4 13:48:04.405: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038004676s Jan 4 13:48:06.412: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044410339s Jan 4 13:48:08.439: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.07135567s Jan 4 13:48:10.448: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.080951729s STEP: Saw pod success Jan 4 13:48:10.449: INFO: Pod "pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e" satisfied condition "success or failure" Jan 4 13:48:10.455: INFO: Trying to get logs from node jerma-node pod pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e container secret-volume-test: STEP: delete the pod Jan 4 13:48:10.686: INFO: Waiting for pod pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e to disappear Jan 4 13:48:10.696: INFO: Pod pod-secrets-15904cdc-da6e-4c01-af35-08b1d67c9b7e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:48:10.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5212" for this suite. • [SLOW TEST:14.528 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":151,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:48:10.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 13:48:11.916: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 13:48:13.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742491, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742491, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742492, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742491, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... the same deployment status was logged again, unchanged except for the timestamps, at 13:48:15.953, 13:48:17.940 and 13:48:19.943 ...]
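Each "Wait for the deployment to be ready" block in this log, for sample-apiserver-deployment earlier and for sample-webhook-deployment here, polls the Deployment every two seconds and dumps its full status until the Available condition turns True. A sketch of that loop against the context-free client-go method signatures of the v1.17 era (package and function names are hypothetical):

package sketch

import (
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDeploymentAvailable polls until the Deployment reports Available=True,
// printing the whole status on every round -- the source of the repeated
// v1.DeploymentStatus dumps seen in the log above.
func waitDeploymentAvailable(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range d.Status.Conditions {
			if cond.Type == appsv1.DeploymentAvailable && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("deployment status: %#v\n", d.Status)
		return false, nil
	})
}

The repeated status lines above are exactly what such a dump-on-every-poll loop produces while ReadyReplicas is still 0.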
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 13:48:23.030: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:48:23.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6069" for this suite. STEP: Destroying namespace "webhook-6069-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.112 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":13,"skipped":154,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:48:23.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
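Both AdmissionWebhook tests above operate on MutatingWebhookConfiguration objects: the first lists and then deletes a collection of them, the second toggles the CREATE operation in a webhook's rules via update and patch. A minimal configuration of that shape with the admissionregistration.k8s.io/v1 Go types; the object name, webhook name, and service path are hypothetical, though the service name and namespace appear in the log:

package main

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func mutatingConfig(caBundle []byte) *admissionregistrationv1.MutatingWebhookConfiguration {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-configmaps" // hypothetical path on the webhook service
	return &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"}, // hypothetical
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "adding-configmap-data.example.com", // hypothetical
			Rules: []admissionregistrationv1.RuleWithOperations{{
				// Patching this Operations list on and off "CREATE" is the
				// knob the patching/updating test above turns.
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-6069", // namespace from the log
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}

func main() { _ = mutatingConfig(nil) }

Removing and re-adding Create in Operations is what flips the webhook between ignoring and mutating the test's ConfigMaps.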
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 4 13:48:46.422: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:46.432: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:48:48.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:48.439: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:48:50.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:50.444: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:48:52.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:52.438: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:48:54.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:54.439: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:48:56.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:56.436: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:48:58.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:48:58.438: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:49:00.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:49:00.441: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:49:02.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:49:02.481: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 13:49:04.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 13:49:04.439: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:49:04.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6223" for this suite. 
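The lifecycle test above attaches a PreStop exec hook to a pod and, after deleting it, polls while the hook runs ("Pod pod-with-prestop-exec-hook still exists" every two seconds). A sketch of the pod shape with the v1.17-era core/v1 types, where the hook type is still called Handler (later renamed LifecycleHandler); the image and the hook command are hypothetical:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// prestopPod declares a hook that runs before the container is terminated;
// the e2e test's real hook reports to the HTTPGet-handler container created
// in BeforeEach, which is how "check prestop hook" can verify it ran.
func prestopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "busybox", // hypothetical; the test uses its own image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical command standing in for the test's
							// real notify-the-handler call.
							Command: []string{"sh", "-c", "echo prestop ran"},
						},
					},
				},
			}},
		},
	}
}

func main() { _ = prestopPod() }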
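The Secrets volume tests in this run (one finished at 13:48:10 above, another begins just below) mount a secret into a pod and read it back; the non-root variant additionally pins the projected file mode with DefaultMode and the group ownership with an FSGroup. A sketch with hypothetical IDs, image, and names:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod mounts secretName read-only with 0400 files owned by fsGroup,
// then exits, so a wait-for-"success or failure" loop like the ones in the
// log can observe Phase="Succeeded".
func secretPod(secretName string) *corev1.Pod {
	mode := int32(0400)                      // stored as decimal 256 in the API
	uid, fsGroup := int64(1000), int64(1001) // hypothetical non-root IDs
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // hypothetical image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
}

func main() { _ = secretPod("secret-test-demo") }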
• [SLOW TEST:40.705 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:49:04.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-809ecb7f-0fc2-4636-b5b6-52f5e351b3bd STEP: Creating a pod to test consume secrets Jan 4 13:49:04.676: INFO: Waiting up to 5m0s for pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d" in namespace "secrets-7370" to be "success or failure" Jan 4 13:49:04.693: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.759445ms Jan 4 13:49:06.700: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024694083s Jan 4 13:49:08.717: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041385671s Jan 4 13:49:10.726: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049960969s Jan 4 13:49:12.736: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060478021s Jan 4 13:49:14.743: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067492136s Jan 4 13:49:16.753: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.076916634s Jan 4 13:49:18.776: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.100259083s STEP: Saw pod success Jan 4 13:49:18.776: INFO: Pod "pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d" satisfied condition "success or failure" Jan 4 13:49:18.779: INFO: Trying to get logs from node jerma-node pod pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d container secret-volume-test: STEP: delete the pod Jan 4 13:49:18.912: INFO: Waiting for pod pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d to disappear Jan 4 13:49:18.934: INFO: Pod pod-secrets-0fb36925-804b-41cd-8158-70d631e0aa8d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:49:18.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7370" for this suite. • [SLOW TEST:14.420 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:49:18.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2621 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2621;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2621 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2621;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2621.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2621.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2621.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2621.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2621.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2621.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2621.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2621.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 68.235.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.235.68_udp@PTR;check="$$(dig +tcp +noall +answer +search 68.235.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.235.68_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2621 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2621;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2621 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2621;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2621.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2621.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2621.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2621.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2621.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2621.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2621.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2621.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2621.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 68.235.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.235.68_udp@PTR;check="$$(dig +tcp +noall +answer +search 68.235.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.235.68_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 13:50:21.242: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.246: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.249: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.253: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.258: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.265: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.358: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.362: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.371: INFO: Unable to read jessie_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.378: INFO: Unable to read jessie_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.383: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.388: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.392: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:21.446: INFO: Lookups using dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2621 wheezy_tcp@dns-test-service.dns-2621 wheezy_udp@dns-test-service.dns-2621.svc wheezy_tcp@dns-test-service.dns-2621.svc wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2621 jessie_tcp@dns-test-service.dns-2621 jessie_udp@dns-test-service.dns-2621.svc jessie_tcp@dns-test-service.dns-2621.svc jessie_udp@_http._tcp.dns-test-service.dns-2621.svc jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc] Jan 4 13:50:26.458: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.462: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.470: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.474: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.477: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.483: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.519: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.522: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.526: INFO: Unable to read jessie_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.529: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.532: INFO: Unable to read jessie_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.536: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.539: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.543: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:26.565: INFO: Lookups using dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2621 wheezy_tcp@dns-test-service.dns-2621 wheezy_udp@dns-test-service.dns-2621.svc wheezy_tcp@dns-test-service.dns-2621.svc wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2621 jessie_tcp@dns-test-service.dns-2621 jessie_udp@dns-test-service.dns-2621.svc jessie_tcp@dns-test-service.dns-2621.svc jessie_udp@_http._tcp.dns-test-service.dns-2621.svc jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc] Jan 4 13:50:31.464: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.481: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621 from pod 
dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.495: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.498: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.504: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.508: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.651: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.658: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.665: INFO: Unable to read jessie_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.669: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.673: INFO: Unable to read jessie_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.680: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.688: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.693: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:31.719: INFO: Lookups using dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2621 wheezy_tcp@dns-test-service.dns-2621 wheezy_udp@dns-test-service.dns-2621.svc wheezy_tcp@dns-test-service.dns-2621.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2621 jessie_tcp@dns-test-service.dns-2621 jessie_udp@dns-test-service.dns-2621.svc jessie_tcp@dns-test-service.dns-2621.svc jessie_udp@_http._tcp.dns-test-service.dns-2621.svc jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc] Jan 4 13:50:36.474: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.486: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.501: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.513: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.521: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.528: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.534: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.538: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.665: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.676: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.682: INFO: Unable to read jessie_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.691: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.697: INFO: Unable to read jessie_udp@dns-test-service.dns-2621.svc from pod 
dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.701: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.722: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:36.789: INFO: Lookups using dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2621 wheezy_tcp@dns-test-service.dns-2621 wheezy_udp@dns-test-service.dns-2621.svc wheezy_tcp@dns-test-service.dns-2621.svc wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2621 jessie_tcp@dns-test-service.dns-2621 jessie_udp@dns-test-service.dns-2621.svc jessie_tcp@dns-test-service.dns-2621.svc jessie_udp@_http._tcp.dns-test-service.dns-2621.svc jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc] Jan 4 13:50:41.462: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.471: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.479: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.486: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.507: INFO: Unable to read wheezy_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.519: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.524: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.528: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod 
dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.567: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.571: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.576: INFO: Unable to read jessie_udp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621 from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.584: INFO: Unable to read jessie_udp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.600: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.612: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc from pod dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9: the server could not find the requested resource (get pods dns-test-63c1e165-5d98-425c-85a7-027155cddeb9) Jan 4 13:50:41.736: INFO: Lookups using dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2621 wheezy_tcp@dns-test-service.dns-2621 wheezy_udp@dns-test-service.dns-2621.svc wheezy_tcp@dns-test-service.dns-2621.svc wheezy_udp@_http._tcp.dns-test-service.dns-2621.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2621.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2621 jessie_tcp@dns-test-service.dns-2621 jessie_udp@dns-test-service.dns-2621.svc jessie_tcp@dns-test-service.dns-2621.svc jessie_udp@_http._tcp.dns-test-service.dns-2621.svc jessie_tcp@_http._tcp.dns-test-service.dns-2621.svc] Jan 4 13:50:46.608: INFO: DNS probes using dns-2621/dns-test-63c1e165-5d98-425c-85a7-027155cddeb9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:50:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2621" for this suite. 
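(editor's note) The DNS spec above succeeds once partial names such as "dns-test-service" resolve from inside the probe pods. They resolve only because kubelet writes search domains (dns-2621.svc.cluster.local, svc.cluster.local, cluster.local) into each pod's /etc/resolv.conf; an external resolver would reject the partial name outright. Below is a minimal Go sketch of the same lookup, meant to run inside a pod in that namespace; it is not the e2e framework's own prober:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Partial name: no namespace or cluster suffix. The pod's
        // /etc/resolv.conf search list expands it to
        // dns-test-service.dns-2621.svc.cluster.local.
        addrs, err := net.LookupHost("dns-test-service")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved to:", addrs)
    }

The repeated "Lookups ... failed" entries above are the expected retry loop while the probe pod and the service endpoints converge; the spec only requires that the lookups eventually succeed, which they do at 13:50:46.
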
• [SLOW TEST:87.907 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":16,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:50:46.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 4 13:50:46.948: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 4 13:50:46.960: INFO: Waiting for terminating namespaces to be deleted... Jan 4 13:50:46.963: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 4 13:50:47.000: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.000: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 13:50:47.000: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 4 13:50:47.000: INFO: Container weave ready: true, restart count 1 Jan 4 13:50:47.000: INFO: Container weave-npc ready: true, restart count 0 Jan 4 13:50:47.000: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 4 13:50:47.023: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container kube-scheduler ready: true, restart count 1 Jan 4 13:50:47.023: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container coredns ready: true, restart count 0 Jan 4 13:50:47.023: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container coredns ready: true, restart count 0 Jan 4 13:50:47.023: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container kube-apiserver ready: true, restart count 1 Jan 4 13:50:47.023: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container kube-controller-manager ready: true, restart count 1 Jan 4 13:50:47.023: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container 
etcd ready: true, restart count 1 Jan 4 13:50:47.023: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 4 13:50:47.023: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 13:50:47.023: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 4 13:50:47.023: INFO: Container weave ready: true, restart count 0 Jan 4 13:50:47.023: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2dc1f119-fe2e-412c-b4d8-adedeae7b6c4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-2dc1f119-fe2e-412c-b4d8-adedeae7b6c4 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-2dc1f119-fe2e-412c-b4d8-adedeae7b6c4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:51:11.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5167" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:24.423 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":17,"skipped":269,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:51:11.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 13:51:11.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838" in namespace "downward-api-5121" to be "success or failure" Jan 4 13:51:11.526: INFO: Pod 
"downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 22.109821ms Jan 4 13:51:13.543: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038918944s Jan 4 13:51:15.552: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048369258s Jan 4 13:51:17.644: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140124594s Jan 4 13:51:19.652: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147708507s Jan 4 13:51:21.662: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157742146s Jan 4 13:51:23.669: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 12.164972545s Jan 4 13:51:25.673: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Pending", Reason="", readiness=false. Elapsed: 14.168759895s Jan 4 13:51:27.682: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.177798913s STEP: Saw pod success Jan 4 13:51:27.682: INFO: Pod "downwardapi-volume-16a032d6-e965-491a-b532-351e35190838" satisfied condition "success or failure" Jan 4 13:51:27.685: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-16a032d6-e965-491a-b532-351e35190838 container client-container: STEP: delete the pod Jan 4 13:51:27.860: INFO: Waiting for pod downwardapi-volume-16a032d6-e965-491a-b532-351e35190838 to disappear Jan 4 13:51:27.865: INFO: Pod downwardapi-volume-16a032d6-e965-491a-b532-351e35190838 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:51:27.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5121" for this suite. • [SLOW TEST:16.596 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":275,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:51:27.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:51:45.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7649" for this suite. • [SLOW TEST:17.232 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":19,"skipped":281,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:51:45.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-61f8e688-5987-4954-9a3a-5795483bc12b STEP: Creating a pod to test consume configMaps Jan 4 13:51:45.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e" in namespace "projected-732" to be "success or failure" Jan 4 13:51:45.267: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e": Phase="Pending", Reason="", readiness=false. Elapsed: 74.566501ms Jan 4 13:51:47.277: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084059427s Jan 4 13:51:49.283: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090361805s Jan 4 13:51:51.287: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094895119s Jan 4 13:51:53.293: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101009739s Jan 4 13:51:55.369: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.176121498s STEP: Saw pod success Jan 4 13:51:55.369: INFO: Pod "pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e" satisfied condition "success or failure" Jan 4 13:51:55.372: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e container projected-configmap-volume-test: STEP: delete the pod Jan 4 13:51:55.565: INFO: Waiting for pod pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e to disappear Jan 4 13:51:55.572: INFO: Pod pod-projected-configmaps-5554143a-50c5-4858-ba9f-b2c79a44125e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:51:55.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-732" for this suite. • [SLOW TEST:10.479 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:51:55.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 4 13:52:11.839: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 4 13:52:27.052: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:52:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2613" for this suite. 
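(editor's note) The "Delete Grace Period" steps above submit a pod, delete it gracefully, and confirm the kubelet observed the termination notice. A minimal client-go sketch of the same graceful delete, assuming a recent client-go (context-taking API; the v1.17-era client passed a *DeleteOptions without a context) and a hypothetical pod name "example-pod"; the kubeconfig path is the one used throughout this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        grace := int64(30) // seconds the kubelet waits before force-killing the containers
        err = clientset.CoreV1().Pods("pods-2613").Delete(context.TODO(),
            "example-pod", // hypothetical name
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
        fmt.Println("delete returned:", err)
    }

Setting GracePeriodSeconds to 0 would instead request immediate, forced deletion, which is the behavior this spec is guarding against.
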
• [SLOW TEST:31.514 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":21,"skipped":364,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:52:27.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 13:52:27.705: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 13:52:29.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:52:31.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:52:33.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 13:52:35.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742747, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 13:52:38.826: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:52:40.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-465" for this suite. STEP: Destroying namespace "webhook-465-markers" for this suite. 
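(editor's note, ahead of the spec's [AfterEach] cleanup below) The webhook spec registers validating webhooks, lists them, and deletes them as a collection. A minimal client-go sketch of the listing step, under the same recent-client-go assumption as the earlier sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Validating webhooks are cluster-scoped admissionregistration.k8s.io/v1 objects.
        list, err := clientset.AdmissionregistrationV1().
            ValidatingWebhookConfigurations().
            List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, wh := range list.Items {
            fmt.Println("found validating webhook configuration:", wh.Name)
        }
    }
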
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.506 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":22,"skipped":376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:52:40.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-0469ff5f-330e-4798-a653-d383a439fac0 in namespace container-probe-4437 Jan 4 13:52:54.807: INFO: Started pod busybox-0469ff5f-330e-4798-a653-d383a439fac0 in namespace container-probe-4437 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 13:52:54.809: INFO: Initial restart count of pod busybox-0469ff5f-330e-4798-a653-d383a439fac0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:56:56.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4437" for this suite. 
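(editor's note) The probing spec above passes because the container keeps /tmp/health in place, so the exec probe keeps succeeding and restartCount stays at 0 for the full observation window (13:52:54 to 13:56:56, hence the long runtime). A minimal Go sketch of such a container spec, assuming current k8s.io/api (where the embedded probe field is named ProbeHandler; the v1.17-era type called it Handler) and a hypothetical busybox image tag:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        container := v1.Container{
            Name:    "busybox",
            Image:   "busybox:1.29", // hypothetical tag
            Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
            LivenessProbe: &v1.Probe{
                ProbeHandler: v1.ProbeHandler{
                    // "cat /tmp/health" exits 0 while the file exists,
                    // so the kubelet never restarts the container.
                    Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                },
                InitialDelaySeconds: 15,
                PeriodSeconds:       5,
                FailureThreshold:    1,
            },
        }
        fmt.Printf("%+v\n", container)
    }
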
• [SLOW TEST:255.704 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:56:56.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3593.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3593.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 13:57:14.702: INFO: DNS probes using dns-3593/dns-test-2b0aee05-b643-442d-8777-bcb41fe7ffdb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:57:14.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3593" for this suite. 
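(editor's note) The shell loops quoted verbatim in the STEP lines above run dig once per second, over both UDP and TCP, against two names: the API server's service name and the probe pod's own A record, which is the pod IP with dots replaced by dashes under <namespace>.pod.cluster.local. The same pair of lookups in Go, with a hypothetical pod IP, run from inside the cluster (net.LookupHost cannot force UDP vs TCP transport the way the dig flags do, so this is a simplification):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        podIP := "10.44.0.5" // hypothetical; the real probe uses `hostname -i`
        podARec := strings.ReplaceAll(podIP, ".", "-") + ".dns-3593.pod.cluster.local"
        for _, name := range []string{"kubernetes.default.svc.cluster.local", podARec} {
            if addrs, err := net.LookupHost(name); err != nil {
                fmt.Printf("%s: FAILED (%v)\n", name, err)
            } else {
                fmt.Printf("%s: OK -> %v\n", name, addrs)
            }
        }
    }
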
• [SLOW TEST:18.461 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":24,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:57:14.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5882.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5882.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5882.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 13:57:32.047: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.054: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.070: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.075: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.105: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.110: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.114: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.119: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:32.148: INFO: Lookups using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local] Jan 4 13:57:37.170: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods 
dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.176: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.179: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.183: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.193: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.196: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.199: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.202: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:37.208: INFO: Lookups using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local] Jan 4 13:57:42.173: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.202: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.214: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.220: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod 
dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.230: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.233: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.236: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.239: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:42.250: INFO: Lookups using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local] Jan 4 13:57:47.155: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.158: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.161: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.164: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.173: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.175: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods 
dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.178: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.182: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:47.189: INFO: Lookups using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local] Jan 4 13:57:52.170: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.180: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.188: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.200: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.234: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.243: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.258: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.268: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:52.313: INFO: Lookups using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local] Jan 4 13:57:57.156: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.161: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.166: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.169: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.251: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.255: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.257: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.260: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local from pod dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66: the server could not find the requested resource (get pods dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66) Jan 4 13:57:57.266: INFO: Lookups using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5882.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5882.svc.cluster.local jessie_udp@dns-test-service-2.dns-5882.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5882.svc.cluster.local] Jan 4 13:58:02.340: INFO: DNS probes using dns-5882/dns-test-417eec55-ee19-4b75-81b0-f13cb02a2d66 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:58:02.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5882" for this suite. • [SLOW TEST:47.837 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":25,"skipped":489,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:58:02.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jan 4 13:58:02.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 4 13:58:06.081: INFO: stderr: "" Jan 4 13:58:06.081: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:58:06.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8111" for this suite. 
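(editor's note) The cluster-info spec above shells out to kubectl and asserts that the master service appears in stdout; the \x1b[...m sequences in the captured output are ANSI color codes, not corruption. A minimal sketch of the same check via os/exec; note that kubectl from roughly v1.20 onward prints "Kubernetes control plane" instead of "Kubernetes master":

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl",
            "--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
        if err != nil {
            fmt.Println("kubectl failed:", err, string(out))
            return
        }
        s := string(out)
        if strings.Contains(s, "Kubernetes master") || strings.Contains(s, "Kubernetes control plane") {
            fmt.Println("cluster-info lists the master/control-plane endpoint")
        } else {
            fmt.Println("endpoint line not found")
        }
    }
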
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":26,"skipped":491,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:58:06.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-2795d176-607c-4c2d-b732-79a741e42872 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:58:06.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8191" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":27,"skipped":503,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:58:06.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7258 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7258 STEP: Creating statefulset with conflicting port in namespace statefulset-7258 STEP: Waiting until pod test-pod will start running in namespace statefulset-7258 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7258 Jan 4 13:58:46.606: INFO: Observed stateful pod in namespace: statefulset-7258, name: ss-0, uid: fca98900-abe6-4370-905a-a055bd50baaa, status phase: Pending. Waiting for statefulset controller to delete. Jan 4 13:58:53.093: INFO: Observed stateful pod in namespace: statefulset-7258, name: ss-0, uid: fca98900-abe6-4370-905a-a055bd50baaa, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 4 13:58:53.112: INFO: Observed stateful pod in namespace: statefulset-7258, name: ss-0, uid: fca98900-abe6-4370-905a-a055bd50baaa, status phase: Failed. Waiting for statefulset controller to delete. Jan 4 13:58:53.129: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7258 STEP: Removing pod with conflicting port in namespace statefulset-7258 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7258 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 4 13:59:07.473: INFO: Deleting all statefulset in ns statefulset-7258 Jan 4 13:59:07.479: INFO: Scaling statefulset ss to 0 Jan 4 13:59:27.532: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 13:59:27.536: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:59:27.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7258" for this suite. • [SLOW TEST:81.273 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":28,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:59:27.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jan 4 13:59:27.687: INFO: Waiting up to 5m0s for pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79" in namespace "var-expansion-3688" to be "success or failure" Jan 4 13:59:27.690: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130395ms Jan 4 13:59:29.698: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010698711s Jan 4 13:59:31.706: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018344627s Jan 4 13:59:33.729: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.041886131s Jan 4 13:59:35.762: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074670577s Jan 4 13:59:37.767: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079591425s STEP: Saw pod success Jan 4 13:59:37.767: INFO: Pod "var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79" satisfied condition "success or failure" Jan 4 13:59:37.769: INFO: Trying to get logs from node jerma-node pod var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79 container dapi-container: STEP: delete the pod Jan 4 13:59:37.862: INFO: Waiting for pod var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79 to disappear Jan 4 13:59:37.866: INFO: Pod var-expansion-2e399079-36c4-4845-87c2-a1a4bbc5ab79 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:59:37.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3688" for this suite. • [SLOW TEST:10.310 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":532,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:59:37.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:59:38.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-841" for this suite. 
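The QoS rule this test verifies: a pod whose every container sets requests equal to limits for both cpu and memory is classified Guaranteed. A minimal sketch of such a pod spec using the k8s.io/api types (image and quantities are illustrative, not taken from the test):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// With requests == limits on every container, the apiserver sets
// Status.QOSClass to corev1.PodQOSGuaranteed on creation.
var qosPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "guaranteed-pod"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "agnhost",
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // image name borrowed from elsewhere in this run
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("100Mi"),
				},
				Limits: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("100Mi"),
				},
			},
		}},
	},
}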
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":30,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:59:38.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 4 13:59:52.676: INFO: Successfully updated pod "adopt-release-cfkgj" STEP: Checking that the Job readopts the Pod Jan 4 13:59:52.676: INFO: Waiting up to 15m0s for pod "adopt-release-cfkgj" in namespace "job-8898" to be "adopted" Jan 4 13:59:52.683: INFO: Pod "adopt-release-cfkgj": Phase="Running", Reason="", readiness=true. Elapsed: 7.226517ms Jan 4 13:59:54.690: INFO: Pod "adopt-release-cfkgj": Phase="Running", Reason="", readiness=true. Elapsed: 2.013809965s Jan 4 13:59:54.690: INFO: Pod "adopt-release-cfkgj" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 4 13:59:55.201: INFO: Successfully updated pod "adopt-release-cfkgj" STEP: Checking that the Job releases the Pod Jan 4 13:59:55.201: INFO: Waiting up to 15m0s for pod "adopt-release-cfkgj" in namespace "job-8898" to be "released" Jan 4 13:59:55.301: INFO: Pod "adopt-release-cfkgj": Phase="Running", Reason="", readiness=true. Elapsed: 100.02245ms Jan 4 13:59:55.301: INFO: Pod "adopt-release-cfkgj" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 13:59:55.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8898" for this suite. 
• [SLOW TEST:17.342 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":31,"skipped":570,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 13:59:55.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-8bc46808-87e3-4ffd-bdf9-a3980494929f STEP: Creating secret with name s-test-opt-upd-3c28fb18-0347-4444-b441-2fc5e24f22dc STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8bc46808-87e3-4ffd-bdf9-a3980494929f STEP: Updating secret s-test-opt-upd-3c28fb18-0347-4444-b441-2fc5e24f22dc STEP: Creating secret with name s-test-opt-create-5d9d20ab-42dd-4cb2-b38a-39da9b24c085 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:01:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5736" for this suite. 
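The pod rides out the secret deletion because every secret volume here is marked optional: the kubelet starts the pod without the secret, projects the keys once the secret exists, and keeps re-syncing the volume as secrets change. A sketch of the relevant volume sources (secret names copied from the log; layout per k8s.io/api/core/v1):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

var optional = true

// Optional references let the pod run while the named secret is absent.
var secretVolumes = []corev1.Volume{
	{
		Name: "s-del",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-del-8bc46808-87e3-4ffd-bdf9-a3980494929f",
				Optional:   &optional,
			},
		},
	},
	{
		Name: "s-create",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-create-5d9d20ab-42dd-4cb2-b38a-39da9b24c085",
				Optional:   &optional,
			},
		},
	},
}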
• [SLOW TEST:79.921 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":581,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:01:15.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 4 14:01:27.908: INFO: Successfully updated pod "pod-update-d781dd5a-d3c9-420f-9711-3e90017431cf" STEP: verifying the updated pod is in kubernetes Jan 4 14:01:27.989: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:01:27.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7914" for this suite. 
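The "updating the pod" step is an optimistic-concurrency write: the update carries the resourceVersion that was read, and the apiserver rejects it with a conflict if the pod changed in between. A hedged sketch of the usual retry pattern with client-go (pre-1.18 signatures to match this cluster; the label being set is illustrative):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	pods := kubernetes.NewForConfigOrDie(cfg).CoreV1().Pods("pods-7914")

	// Re-read and retry whenever the submitted resourceVersion is stale.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get("pod-update-d781dd5a-d3c9-420f-9711-3e90017431cf", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // illustrative mutation
		_, err = pods.Update(pod)
		return err
	})
	if err != nil {
		panic(err)
	}
}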
• [SLOW TEST:12.691 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:01:28.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 14:01:28.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895" in namespace "projected-2790" to be "success or failure" Jan 4 14:01:28.192: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Pending", Reason="", readiness=false. Elapsed: 15.92619ms Jan 4 14:01:30.205: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028493826s Jan 4 14:01:32.219: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042629274s Jan 4 14:01:34.225: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048612311s Jan 4 14:01:36.233: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057021129s Jan 4 14:01:38.237: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060857909s Jan 4 14:01:40.242: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.065218095s STEP: Saw pod success Jan 4 14:01:40.242: INFO: Pod "downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895" satisfied condition "success or failure" Jan 4 14:01:40.244: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895 container client-container: STEP: delete the pod Jan 4 14:01:40.365: INFO: Waiting for pod downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895 to disappear Jan 4 14:01:40.383: INFO: Pod downwardapi-volume-010686c7-abd4-4a88-b870-0b7840e62895 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:01:40.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2790" for this suite. • [SLOW TEST:12.397 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:01:40.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 14:01:40.524: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b" in namespace "downward-api-6547" to be "success or failure" Jan 4 14:01:40.538: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.891714ms Jan 4 14:01:42.543: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019365889s Jan 4 14:01:44.551: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027094511s Jan 4 14:01:46.565: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041426732s Jan 4 14:01:48.579: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.05536432s Jan 4 14:01:50.692: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168204139s STEP: Saw pod success Jan 4 14:01:50.692: INFO: Pod "downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b" satisfied condition "success or failure" Jan 4 14:01:50.696: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b container client-container: STEP: delete the pod Jan 4 14:01:51.136: INFO: Waiting for pod downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b to disappear Jan 4 14:01:51.140: INFO: Pod downwardapi-volume-b1b76bab-51d1-4fc7-b826-42d7dc483a6b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:01:51.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6547" for this suite. • [SLOW TEST:10.747 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:01:51.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:01:51.471: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:01:52.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4543" for this suite. 
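What makes the status sub-resource separately gettable, updatable, and patchable is the Subresources stanza on the served CRD version: with it enabled, writes to /status persist only status changes, and writes to the main endpoint ignore status. An illustrative apiextensions/v1 definition (group and kind are made-up names, not taken from the test):

package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var preserve = true

var crd = &apiextv1.CustomResourceDefinition{
	ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"},
	Spec: apiextv1.CustomResourceDefinitionSpec{
		Group: "mygroup.example.com",
		Names: apiextv1.CustomResourceDefinitionNames{
			Plural: "noxus", Singular: "noxu", Kind: "Noxu", ListKind: "NoxuList",
		},
		Scope: apiextv1.NamespaceScoped,
		Versions: []apiextv1.CustomResourceDefinitionVersion{{
			Name: "v1", Served: true, Storage: true,
			Schema: &apiextv1.CustomResourceValidation{
				OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
					Type:                   "object",
					XPreserveUnknownFields: &preserve, // minimal structural schema
				},
			},
			// The line the test depends on: it splits the resource into a
			// spec endpoint and a /status endpoint.
			Subresources: &apiextv1.CustomResourceSubresources{
				Status: &apiextv1.CustomResourceSubresourceStatus{},
			},
		}},
	},
}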
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":36,"skipped":760,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:01:52.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0a081c0e-6e14-4089-9037-8092d8189ecd STEP: Creating a pod to test consume configMaps Jan 4 14:01:52.473: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f" in namespace "projected-1485" to be "success or failure" Jan 4 14:01:52.482: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35862ms Jan 4 14:01:54.497: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023141559s Jan 4 14:01:56.504: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03088496s Jan 4 14:01:58.512: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039072135s Jan 4 14:02:00.520: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046805879s Jan 4 14:02:02.529: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055466583s STEP: Saw pod success Jan 4 14:02:02.529: INFO: Pod "pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f" satisfied condition "success or failure" Jan 4 14:02:02.536: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f container projected-configmap-volume-test: STEP: delete the pod Jan 4 14:02:02.609: INFO: Waiting for pod pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f to disappear Jan 4 14:02:02.614: INFO: Pod pod-projected-configmaps-31677e1f-13b3-429d-865f-0d935f96a96f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:02:02.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1485" for this suite. 
• [SLOW TEST:10.415 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":763,"failed":0} [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:02:02.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:02:02.681: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 4 14:02:07.722: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 4 14:02:27.744: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 4 14:02:29.763: INFO: Creating deployment "test-rollover-deployment" Jan 4 14:02:29.786: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 4 14:02:31.800: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 4 14:02:31.811: INFO: Ensure that both replica sets have 1 created replica Jan 4 14:02:31.819: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 4 14:02:31.832: INFO: Updating deployment test-rollover-deployment Jan 4 14:02:31.832: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 4 14:02:33.906: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 4 14:02:33.913: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 4 14:02:33.923: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:33.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743352, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} 
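The status polls above and below keep repeating because of the rollover strategy visible in the Deployment dump further down: maxUnavailable 0 plus maxSurge 1 means the old pod cannot be torn down until the new one is available, and minReadySeconds 10 means the new pod must hold Ready for ten seconds before it counts as available. A spec fragment reconstructing just those fields (Selector and Template omitted):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var (
	zero = intstr.FromInt(0)
	one  = intstr.FromInt(1)
)

var rolloverSpec = appsv1.DeploymentSpec{
	MinReadySeconds: 10, // a new pod must stay Ready this long to become available
	Strategy: appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &zero, // never drop below the desired replica count
			MaxSurge:       &one,  // but allow one extra pod during the rollover
		},
	},
}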
Jan 4 14:02:35.930: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:35.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743352, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:37.938: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:37.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743352, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:39.954: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:39.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743352, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:41.933: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:41.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743360, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:43.934: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:43.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743360, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:45.935: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:45.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743360, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:47.933: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:47.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743360, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:49.942: INFO: all replica sets need to contain the pod-template-hash label Jan 4 14:02:49.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743360, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:02:51.931: INFO: Jan 4 14:02:51.931: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 4 14:02:51.939: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5643 /apis/apps/v1/namespaces/deployment-5643/deployments/test-rollover-deployment a9d1719a-88c6-465e-b91f-41cf97b4523a 23650 2 2020-01-04 14:02:29 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000beba68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-04 14:02:29 +0000 UTC,LastTransitionTime:2020-01-04 14:02:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-04 14:02:50 +0000 
UTC,LastTransitionTime:2020-01-04 14:02:29 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 4 14:02:51.943: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5643 /apis/apps/v1/namespaces/deployment-5643/replicasets/test-rollover-deployment-574d6dfbff 73ef8bfe-4511-4342-99f0-e782725722b1 23641 2 2020-01-04 14:02:31 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a9d1719a-88c6-465e-b91f-41cf97b4523a 0xc002774fe7 0xc002774fe8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002775068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:02:51.943: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 4 14:02:51.943: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5643 /apis/apps/v1/namespaces/deployment-5643/replicasets/test-rollover-controller 93b10f87-eef8-47e0-bf35-cda3c8a405db 23649 2 2020-01-04 14:02:02 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a9d1719a-88c6-465e-b91f-41cf97b4523a 0xc002774f17 0xc002774f18}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002774f78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:02:51.943: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5643 /apis/apps/v1/namespaces/deployment-5643/replicasets/test-rollover-deployment-f6c94f66c 3638ef21-a655-4c98-b466-0f70d70c4663 23587 2 
2020-01-04 14:02:29 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a9d1719a-88c6-465e-b91f-41cf97b4523a 0xc0027751c0 0xc0027751c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027752c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:02:51.946: INFO: Pod "test-rollover-deployment-574d6dfbff-m752j" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-m752j test-rollover-deployment-574d6dfbff- deployment-5643 /api/v1/namespaces/deployment-5643/pods/test-rollover-deployment-574d6dfbff-m752j 44274908-3fa9-42b0-9fe1-b4ebf7422ac7 23615 0 2020-01-04 14:02:31 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 73ef8bfe-4511-4342-99f0-e782725722b1 0xc002775bc7 0xc002775bc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sjpvs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sjpvs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sjpvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:02:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:02:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:02:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-04 14:02:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:02:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://f4cf2a37f4dcee1259160ff2c0c0a678fc2e7d3155b1a20592d5be38828ebd31,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:02:51.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5643" for this suite. • [SLOW TEST:49.333 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":38,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:02:51.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 4 14:03:04.131: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5741 PodName:pod-sharedvolume-69edadf3-6dc3-485b-97ac-5a4422cbacdb ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:03:04.131: INFO: >>> kubeConfig: /root/.kube/config I0104 14:03:04.256941 9 log.go:172] (0xc002597ce0) (0xc000c21ea0) Create stream I0104 14:03:04.257075 9 log.go:172] (0xc002597ce0) (0xc000c21ea0) Stream added, broadcasting: 1 I0104 14:03:04.266178 9 log.go:172] (0xc002597ce0) Reply frame received for 1 I0104 14:03:04.266230 9 log.go:172] (0xc002597ce0) (0xc0013c21e0) Create stream I0104 14:03:04.266245 9 log.go:172] (0xc002597ce0) (0xc0013c21e0) Stream added, broadcasting: 3 I0104 14:03:04.268181 9 log.go:172] (0xc002597ce0) Reply frame received for 3 I0104 14:03:04.268210 9 log.go:172] (0xc002597ce0) (0xc0002d6be0) Create stream I0104 14:03:04.268221 9 log.go:172] (0xc002597ce0) (0xc0002d6be0) Stream added, broadcasting: 5 I0104 14:03:04.273472 9 log.go:172] (0xc002597ce0) Reply frame 
received for 5 I0104 14:03:04.489220 9 log.go:172] (0xc002597ce0) Data frame received for 3 I0104 14:03:04.489304 9 log.go:172] (0xc0013c21e0) (3) Data frame handling I0104 14:03:04.489355 9 log.go:172] (0xc0013c21e0) (3) Data frame sent I0104 14:03:04.694426 9 log.go:172] (0xc002597ce0) (0xc0013c21e0) Stream removed, broadcasting: 3 I0104 14:03:04.694600 9 log.go:172] (0xc002597ce0) Data frame received for 1 I0104 14:03:04.694638 9 log.go:172] (0xc002597ce0) (0xc0002d6be0) Stream removed, broadcasting: 5 I0104 14:03:04.694674 9 log.go:172] (0xc000c21ea0) (1) Data frame handling I0104 14:03:04.694694 9 log.go:172] (0xc000c21ea0) (1) Data frame sent I0104 14:03:04.694705 9 log.go:172] (0xc002597ce0) (0xc000c21ea0) Stream removed, broadcasting: 1 I0104 14:03:04.694719 9 log.go:172] (0xc002597ce0) Go away received I0104 14:03:04.695727 9 log.go:172] (0xc002597ce0) (0xc000c21ea0) Stream removed, broadcasting: 1 I0104 14:03:04.695767 9 log.go:172] (0xc002597ce0) (0xc0013c21e0) Stream removed, broadcasting: 3 I0104 14:03:04.695791 9 log.go:172] (0xc002597ce0) (0xc0002d6be0) Stream removed, broadcasting: 5 Jan 4 14:03:04.695: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:03:04.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5741" for this suite. • [SLOW TEST:12.773 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":39,"skipped":814,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:03:04.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 4 14:03:11.430: INFO: Successfully updated pod "annotationupdate58eace05-3519-455b-907e-f873fb761c81" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:03:15.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1765" for this suite. 
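Annotations (like labels) are downward-API fields the kubelet keeps refreshing on a running pod, which is why the annotation update at 14:03:11 can be observed inside the volume without restarting the container. A sketch of the volume definition involved (the file path is illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

var annotationsVol = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "annotations", // file whose contents track the pod's annotations
				FieldRef: &corev1.ObjectFieldSelector{
					FieldPath: "metadata.annotations",
				},
			}},
		},
	},
}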
• [SLOW TEST:11.025 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:03:15.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jan 4 14:03:15.884: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6891" to be "success or failure" Jan 4 14:03:15.895: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644656ms Jan 4 14:03:17.909: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024714783s Jan 4 14:03:19.931: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046491958s Jan 4 14:03:21.966: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081934509s Jan 4 14:03:24.026: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141841662s Jan 4 14:03:26.055: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170862272s Jan 4 14:03:28.063: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.178501647s STEP: Saw pod success Jan 4 14:03:28.063: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 4 14:03:28.068: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 4 14:03:28.164: INFO: Waiting for pod pod-host-path-test to disappear Jan 4 14:03:28.173: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:03:28.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6891" for this suite. 
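Behind this check is a plain hostPath volume: the pod mounts a directory from the node, and its containers stat the mount point to compare the reported mode bits against the expectation. A minimal sketch of the volume (the node path is illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

var hostPathVol = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		HostPath: &corev1.HostPathVolumeSource{
			Path: "/tmp", // directory on the node's filesystem
		},
	},
}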
• [SLOW TEST:12.448 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:03:28.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:03:36.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5849" for this suite. • [SLOW TEST:8.162 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":919,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:03:36.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:03:36.486: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod 
quota Jan 4 14:03:37.643: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:03:38.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7655" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":43,"skipped":921,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:03:38.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:03:39.191: INFO: Creating deployment "test-recreate-deployment" Jan 4 14:03:39.332: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 4 14:03:39.497: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 4 14:03:42.649: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 4 14:03:42.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:03:44.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:03:47.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:03:48.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:03:50.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:03:52.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713743419, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:03:54.668: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 4 14:03:54.686: INFO: Updating deployment test-recreate-deployment Jan 4 14:03:54.687: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 4 14:03:55.256: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5353 /apis/apps/v1/namespaces/deployment-5353/deployments/test-recreate-deployment 36a5004b-df9f-4e2a-bb30-d9a2eeeafe10 24045 2 2020-01-04 14:03:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002967ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-04 14:03:55 +0000 UTC,LastTransitionTime:2020-01-04 14:03:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-04 14:03:55 +0000 UTC,LastTransitionTime:2020-01-04 14:03:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 4 14:03:55.259: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5353 /apis/apps/v1/namespaces/deployment-5353/replicasets/test-recreate-deployment-5f94c574ff ca437aaa-7d61-449f-951b-dd7a5ab4d488 24043 1 2020-01-04 14:03:55 +0000 UTC map[name:sample-pod-3 
pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 36a5004b-df9f-4e2a-bb30-d9a2eeeafe10 0xc000b07967 0xc000b07968}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000b079c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:03:55.259: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 4 14:03:55.259: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5353 /apis/apps/v1/namespaces/deployment-5353/replicasets/test-recreate-deployment-799c574856 d877b60c-74e6-486c-820c-5fb79cb2f822 24032 2 2020-01-04 14:03:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 36a5004b-df9f-4e2a-bb30-d9a2eeeafe10 0xc000b07a57 0xc000b07a58}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000b07ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:03:55.262: INFO: Pod "test-recreate-deployment-5f94c574ff-stgr4" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-stgr4 test-recreate-deployment-5f94c574ff- deployment-5353 /api/v1/namespaces/deployment-5353/pods/test-recreate-deployment-5f94c574ff-stgr4 ce829b9f-59c5-4b04-ab27-d49ee2495681 24044 0 2020-01-04 14:03:55 +0000 UTC map[name:sample-pod-3 
pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff ca437aaa-7d61-449f-951b-dd7a5ab4d488 0xc004ee6157 0xc004ee6158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nhk2x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nhk2x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nhk2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:03:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:03:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-04 14:03:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:03:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:03:55.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5353" for this suite. • [SLOW TEST:16.296 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":44,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:03:55.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 4 14:03:55.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7333' Jan 4 14:03:55.598: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 14:03:55.598: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jan 4 14:03:55.650: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 4 14:03:55.812: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 4 14:03:55.843: INFO: scanned /root for discovery docs: Jan 4 14:03:55.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7333' Jan 4 14:04:38.562: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 4 14:04:38.562: INFO: stdout: "Created e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262\nScaling up e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Jan 4 14:04:38.563: INFO: stdout: "Created e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262\nScaling up e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 4 14:04:38.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7333' Jan 4 14:04:38.697: INFO: stderr: "" Jan 4 14:04:38.697: INFO: stdout: "e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262-dc29m e2e-test-httpd-rc-r59mc " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Jan 4 14:04:43.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7333' Jan 4 14:04:43.856: INFO: stderr: "" Jan 4 14:04:43.856: INFO: stdout: "e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262-dc29m " Jan 4 14:04:43.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262-dc29m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7333' Jan 4 14:04:44.032: INFO: stderr: "" Jan 4 14:04:44.032: INFO: stdout: "true" Jan 4 14:04:44.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262-dc29m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7333' Jan 4 14:04:44.131: INFO: stderr: "" Jan 4 14:04:44.131: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 4 14:04:44.131: INFO: e2e-test-httpd-rc-a8210f60ea998dcce5cc7df4bf085262-dc29m is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Jan 4 14:04:44.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7333' Jan 4 14:04:44.221: INFO: stderr: "" Jan 4 14:04:44.222: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:04:44.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7333" for this suite. • [SLOW TEST:49.006 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":45,"skipped":962,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:04:44.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1734 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 4 14:04:44.339: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 4 14:05:20.545: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1734 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:05:20.545: INFO: >>> kubeConfig: /root/.kube/config I0104 14:05:20.682833 9 log.go:172] (0xc0028c8160) (0xc000ce50e0) 
Create stream I0104 14:05:20.683005 9 log.go:172] (0xc0028c8160) (0xc000ce50e0) Stream added, broadcasting: 1 I0104 14:05:20.698473 9 log.go:172] (0xc0028c8160) Reply frame received for 1 I0104 14:05:20.698576 9 log.go:172] (0xc0028c8160) (0xc000ce5360) Create stream I0104 14:05:20.698584 9 log.go:172] (0xc0028c8160) (0xc000ce5360) Stream added, broadcasting: 3 I0104 14:05:20.701170 9 log.go:172] (0xc0028c8160) Reply frame received for 3 I0104 14:05:20.701198 9 log.go:172] (0xc0028c8160) (0xc0010bf540) Create stream I0104 14:05:20.701203 9 log.go:172] (0xc0028c8160) (0xc0010bf540) Stream added, broadcasting: 5 I0104 14:05:20.703398 9 log.go:172] (0xc0028c8160) Reply frame received for 5 I0104 14:05:20.875938 9 log.go:172] (0xc0028c8160) Data frame received for 3 I0104 14:05:20.875983 9 log.go:172] (0xc000ce5360) (3) Data frame handling I0104 14:05:20.876023 9 log.go:172] (0xc000ce5360) (3) Data frame sent I0104 14:05:21.059122 9 log.go:172] (0xc0028c8160) Data frame received for 1 I0104 14:05:21.059199 9 log.go:172] (0xc000ce50e0) (1) Data frame handling I0104 14:05:21.059218 9 log.go:172] (0xc000ce50e0) (1) Data frame sent I0104 14:05:21.059227 9 log.go:172] (0xc0028c8160) (0xc000ce50e0) Stream removed, broadcasting: 1 I0104 14:05:21.059610 9 log.go:172] (0xc0028c8160) (0xc000ce5360) Stream removed, broadcasting: 3 I0104 14:05:21.059640 9 log.go:172] (0xc0028c8160) (0xc0010bf540) Stream removed, broadcasting: 5 I0104 14:05:21.059692 9 log.go:172] (0xc0028c8160) Go away received I0104 14:05:21.059727 9 log.go:172] (0xc0028c8160) (0xc000ce50e0) Stream removed, broadcasting: 1 I0104 14:05:21.059745 9 log.go:172] (0xc0028c8160) (0xc000ce5360) Stream removed, broadcasting: 3 I0104 14:05:21.059756 9 log.go:172] (0xc0028c8160) (0xc0010bf540) Stream removed, broadcasting: 5 Jan 4 14:05:21.059: INFO: Found all expected endpoints: [netserver-0] Jan 4 14:05:21.064: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1734 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:05:21.064: INFO: >>> kubeConfig: /root/.kube/config I0104 14:05:21.137827 9 log.go:172] (0xc005c9a370) (0xc000681b80) Create stream I0104 14:05:21.137952 9 log.go:172] (0xc005c9a370) (0xc000681b80) Stream added, broadcasting: 1 I0104 14:05:21.148904 9 log.go:172] (0xc005c9a370) Reply frame received for 1 I0104 14:05:21.148970 9 log.go:172] (0xc005c9a370) (0xc0017d1b80) Create stream I0104 14:05:21.148979 9 log.go:172] (0xc005c9a370) (0xc0017d1b80) Stream added, broadcasting: 3 I0104 14:05:21.154269 9 log.go:172] (0xc005c9a370) Reply frame received for 3 I0104 14:05:21.154339 9 log.go:172] (0xc005c9a370) (0xc00064c820) Create stream I0104 14:05:21.154372 9 log.go:172] (0xc005c9a370) (0xc00064c820) Stream added, broadcasting: 5 I0104 14:05:21.159381 9 log.go:172] (0xc005c9a370) Reply frame received for 5 I0104 14:05:21.271676 9 log.go:172] (0xc005c9a370) Data frame received for 3 I0104 14:05:21.271718 9 log.go:172] (0xc0017d1b80) (3) Data frame handling I0104 14:05:21.271732 9 log.go:172] (0xc0017d1b80) (3) Data frame sent I0104 14:05:21.395044 9 log.go:172] (0xc005c9a370) (0xc0017d1b80) Stream removed, broadcasting: 3 I0104 14:05:21.395356 9 log.go:172] (0xc005c9a370) (0xc00064c820) Stream removed, broadcasting: 5 I0104 14:05:21.395418 9 log.go:172] (0xc005c9a370) Data frame received for 1 I0104 14:05:21.395450 9 log.go:172] 
(0xc000681b80) (1) Data frame handling I0104 14:05:21.395478 9 log.go:172] (0xc000681b80) (1) Data frame sent I0104 14:05:21.395503 9 log.go:172] (0xc005c9a370) (0xc000681b80) Stream removed, broadcasting: 1 I0104 14:05:21.395611 9 log.go:172] (0xc005c9a370) (0xc000681b80) Stream removed, broadcasting: 1 I0104 14:05:21.395639 9 log.go:172] (0xc005c9a370) (0xc0017d1b80) Stream removed, broadcasting: 3 I0104 14:05:21.395663 9 log.go:172] (0xc005c9a370) (0xc00064c820) Stream removed, broadcasting: 5 Jan 4 14:05:21.396: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:05:21.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1734" for this suite. • [SLOW TEST:37.132 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":964,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:05:21.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating an pod Jan 4 14:05:21.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1689 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 4 14:05:21.931: INFO: stderr: "" Jan 4 14:05:21.931: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jan 4 14:05:21.931: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 4 14:05:21.932: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1689" to be "running and ready, or succeeded" Jan 4 14:05:21.944: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051516ms Jan 4 14:05:23.952: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0206028s Jan 4 14:05:25.960: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028772216s Jan 4 14:05:27.987: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05524368s Jan 4 14:05:30.067: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13538527s Jan 4 14:05:32.074: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142289897s Jan 4 14:05:34.081: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.148984707s Jan 4 14:05:36.090: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 14.158328875s Jan 4 14:05:36.090: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 4 14:05:36.090: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Jan 4 14:05:36.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1689' Jan 4 14:05:36.229: INFO: stderr: "" Jan 4 14:05:36.230: INFO: stdout: "I0104 14:05:30.675990 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/xjgp 580\nI0104 14:05:30.892558 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/mlrk 596\nI0104 14:05:31.076310 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/6pd 420\nI0104 14:05:31.276580 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/48d 269\nI0104 14:05:31.477319 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/nrj 558\nI0104 14:05:31.676967 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/7hd 497\nI0104 14:05:31.885792 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/kcj 258\nI0104 14:05:32.077177 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/6xnk 344\nI0104 14:05:32.276770 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/rnhn 489\nI0104 14:05:32.478314 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/mk99 462\nI0104 14:05:32.676710 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/mll 297\nI0104 14:05:32.876914 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/scq7 409\nI0104 14:05:33.076692 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/kxx 311\nI0104 14:05:33.276871 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/zfm 410\nI0104 14:05:33.487307 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/wvpf 524\nI0104 14:05:33.676619 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/vvk7 526\nI0104 14:05:33.876807 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/c9fn 421\nI0104 14:05:34.078058 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/h7h 302\nI0104 14:05:34.277059 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/ppnp 411\nI0104 14:05:34.477463 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/59q 379\nI0104 14:05:34.677047 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/757p 437\nI0104 14:05:34.879040 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/ckjz 337\nI0104 14:05:35.076840 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/ss89 318\nI0104 14:05:35.277842 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/xxd 509\nI0104 14:05:35.476577 1 logs_generator.go:76] 24 GET 
/api/v1/namespaces/kube-system/pods/f5h 507\nI0104 14:05:35.676220 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/kgp 464\nI0104 14:05:35.876648 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/hh7 572\nI0104 14:05:36.077862 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/kdgm 210\n" STEP: limiting log lines Jan 4 14:05:36.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1689 --tail=1' Jan 4 14:05:36.380: INFO: stderr: "" Jan 4 14:05:36.380: INFO: stdout: "I0104 14:05:36.278111 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/dt5 209\n" Jan 4 14:05:36.380: INFO: got output "I0104 14:05:36.278111 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/dt5 209\n" STEP: limiting log bytes Jan 4 14:05:36.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1689 --limit-bytes=1' Jan 4 14:05:36.578: INFO: stderr: "" Jan 4 14:05:36.579: INFO: stdout: "I" Jan 4 14:05:36.579: INFO: got output "I" STEP: exposing timestamps Jan 4 14:05:36.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1689 --tail=1 --timestamps' Jan 4 14:05:36.752: INFO: stderr: "" Jan 4 14:05:36.752: INFO: stdout: "2020-01-04T14:05:36.681722803Z I0104 14:05:36.678547 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/89b 369\n" Jan 4 14:05:36.753: INFO: got output "2020-01-04T14:05:36.681722803Z I0104 14:05:36.678547 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/89b 369\n" STEP: restricting to a time range Jan 4 14:05:39.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1689 --since=1s' Jan 4 14:05:39.390: INFO: stderr: "" Jan 4 14:05:39.390: INFO: stdout: "I0104 14:05:38.477873 1 logs_generator.go:76] 39 GET /api/v1/namespaces/default/pods/twxk 382\nI0104 14:05:38.677251 1 logs_generator.go:76] 40 GET /api/v1/namespaces/default/pods/m56 491\nI0104 14:05:38.877072 1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/cng 382\nI0104 14:05:39.077816 1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/pnw 298\nI0104 14:05:39.276901 1 logs_generator.go:76] 43 GET /api/v1/namespaces/kube-system/pods/8xr4 545\n" Jan 4 14:05:39.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1689 --since=24h' Jan 4 14:05:39.497: INFO: stderr: "" Jan 4 14:05:39.497: INFO: stdout: "I0104 14:05:30.675990 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/xjgp 580\nI0104 14:05:30.892558 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/mlrk 596\nI0104 14:05:31.076310 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/6pd 420\nI0104 14:05:31.276580 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/48d 269\nI0104 14:05:31.477319 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/nrj 558\nI0104 14:05:31.676967 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/7hd 497\nI0104 14:05:31.885792 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/kcj 258\nI0104 14:05:32.077177 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/6xnk 344\nI0104 14:05:32.276770 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/rnhn 489\nI0104 14:05:32.478314 1 logs_generator.go:76] 9 GET 
/api/v1/namespaces/kube-system/pods/mk99 462\nI0104 14:05:32.676710 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/mll 297\nI0104 14:05:32.876914 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/scq7 409\nI0104 14:05:33.076692 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/kxx 311\nI0104 14:05:33.276871 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/zfm 410\nI0104 14:05:33.487307 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/wvpf 524\nI0104 14:05:33.676619 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/vvk7 526\nI0104 14:05:33.876807 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/c9fn 421\nI0104 14:05:34.078058 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/h7h 302\nI0104 14:05:34.277059 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/ppnp 411\nI0104 14:05:34.477463 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/59q 379\nI0104 14:05:34.677047 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/757p 437\nI0104 14:05:34.879040 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/ckjz 337\nI0104 14:05:35.076840 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/ss89 318\nI0104 14:05:35.277842 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/xxd 509\nI0104 14:05:35.476577 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/f5h 507\nI0104 14:05:35.676220 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/kgp 464\nI0104 14:05:35.876648 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/hh7 572\nI0104 14:05:36.077862 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/kdgm 210\nI0104 14:05:36.278111 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/dt5 209\nI0104 14:05:36.478553 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/nmb 395\nI0104 14:05:36.678547 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/89b 369\nI0104 14:05:36.877847 1 logs_generator.go:76] 31 GET /api/v1/namespaces/default/pods/hdxt 468\nI0104 14:05:37.077894 1 logs_generator.go:76] 32 GET /api/v1/namespaces/kube-system/pods/zcl 455\nI0104 14:05:37.280901 1 logs_generator.go:76] 33 GET /api/v1/namespaces/default/pods/bk2 256\nI0104 14:05:37.476706 1 logs_generator.go:76] 34 POST /api/v1/namespaces/ns/pods/sn9c 443\nI0104 14:05:37.678412 1 logs_generator.go:76] 35 PUT /api/v1/namespaces/ns/pods/sl6t 435\nI0104 14:05:37.876911 1 logs_generator.go:76] 36 GET /api/v1/namespaces/default/pods/lqvl 404\nI0104 14:05:38.078899 1 logs_generator.go:76] 37 PUT /api/v1/namespaces/default/pods/99t 521\nI0104 14:05:38.277229 1 logs_generator.go:76] 38 GET /api/v1/namespaces/kube-system/pods/mm4j 366\nI0104 14:05:38.477873 1 logs_generator.go:76] 39 GET /api/v1/namespaces/default/pods/twxk 382\nI0104 14:05:38.677251 1 logs_generator.go:76] 40 GET /api/v1/namespaces/default/pods/m56 491\nI0104 14:05:38.877072 1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/cng 382\nI0104 14:05:39.077816 1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/pnw 298\nI0104 14:05:39.276901 1 logs_generator.go:76] 43 GET /api/v1/namespaces/kube-system/pods/8xr4 545\nI0104 14:05:39.477202 1 logs_generator.go:76] 44 GET /api/v1/namespaces/kube-system/pods/2rrz 550\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Jan 4 14:05:39.498: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1689' Jan 4 14:05:46.612: INFO: stderr: "" Jan 4 14:05:46.612: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:05:46.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1689" for this suite. • [SLOW TEST:25.249 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":47,"skipped":966,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:05:46.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 4 14:05:46.723: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24497 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 14:05:46.724: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24497 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 4 14:05:56.735: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24530 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 4 14:05:56.736: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24530 0 2020-01-04 
14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 4 14:06:06.749: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24554 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 14:06:06.749: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24554 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 4 14:06:16.762: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24578 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 14:06:16.762: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-a f1276f1b-aab9-468b-ac13-06b1258af6d4 24578 0 2020-01-04 14:05:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 4 14:06:26.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-b 84a344ba-bf2a-4f00-9eb5-c45bbd6deac1 24602 0 2020-01-04 14:06:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 14:06:26.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-b 84a344ba-bf2a-4f00-9eb5-c45bbd6deac1 24602 0 2020-01-04 14:06:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 4 14:06:37.166: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-b 84a344ba-bf2a-4f00-9eb5-c45bbd6deac1 24624 0 2020-01-04 14:06:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 14:06:37.166: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2672 /api/v1/namespaces/watch-2672/configmaps/e2e-watch-test-configmap-b 84a344ba-bf2a-4f00-9eb5-c45bbd6deac1 24624 0 2020-01-04 14:06:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:06:47.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2672" for this suite. • [SLOW TEST:60.522 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":48,"skipped":982,"failed":0} S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:06:47.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-f0b7aa9f-522f-447c-8bef-2acd0c51e0b9 STEP: Creating configMap with name cm-test-opt-upd-404012ed-1e02-4227-935a-4a906ff7cca7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f0b7aa9f-522f-447c-8bef-2acd0c51e0b9 STEP: Updating configmap cm-test-opt-upd-404012ed-1e02-4227-935a-4a906ff7cca7 STEP: Creating configMap with name cm-test-opt-create-f1ce317d-82d3-43d6-a802-4b3862e3a899 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:07:03.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1879" for this suite. 
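------------------------------
The optional-configmap test above mounts two configmaps as volumes with the Optional flag set, then deletes one ("cm-test-opt-del-..."), updates the other ("cm-test-opt-upd-..."), creates a third, and waits for the kubelet to re-render the mounted files. The key spec detail is Optional on the volume source; a sketch, assuming current k8s.io/api types, with the generated name suffixes dropped for readability:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	// With Optional=true the pod starts (and keeps running) even if the
	// referenced configmap is missing; the mount is re-populated once the
	// configmap exists again, which is the update the test waits to observe.
	vols := []corev1.Volume{
		{
			Name: "cm-volume-del",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
					Optional:             &optional,
				},
			},
		},
		{
			Name: "cm-volume-upd",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
					Optional:             &optional,
				},
			},
		},
	}
	fmt.Println(len(vols), "configmap volumes declared")
}
------------------------------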
• [SLOW TEST:16.580 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":983,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:07:03.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Jan 4 14:07:03.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8896' Jan 4 14:07:04.423: INFO: stderr: "" Jan 4 14:07:04.424: INFO: stdout: "pod/pause created\n" Jan 4 14:07:04.424: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 4 14:07:04.424: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8896" to be "running and ready" Jan 4 14:07:04.432: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207139ms Jan 4 14:07:06.440: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016241473s Jan 4 14:07:08.449: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024865588s Jan 4 14:07:10.454: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03041771s Jan 4 14:07:12.460: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036108417s Jan 4 14:07:14.470: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.045969351s Jan 4 14:07:16.477: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.052580095s Jan 4 14:07:18.525: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 14.101032969s Jan 4 14:07:18.525: INFO: Pod "pause" satisfied condition "running and ready" Jan 4 14:07:18.525: INFO: Wanted all 1 pods to be running and ready. Result: true. 
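The poll loop above repeats until the pause pod reports "running and ready". This is not the framework's actual helper, just a sketch of the condition it polls for, expressed against the k8s.io/api types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady mirrors the "running and ready" check logged above:
// phase Running plus a Ready condition with status True.
func podReady(p *corev1.Pod) bool {
	if p.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(podReady(p)) // true
}
```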
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jan 4 14:07:18.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8896' Jan 4 14:07:18.662: INFO: stderr: "" Jan 4 14:07:18.662: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 4 14:07:18.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8896' Jan 4 14:07:18.961: INFO: stderr: "" Jan 4 14:07:18.962: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 14s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 4 14:07:18.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8896' Jan 4 14:07:19.134: INFO: stderr: "" Jan 4 14:07:19.134: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 4 14:07:19.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8896' Jan 4 14:07:19.285: INFO: stderr: "" Jan 4 14:07:19.285: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 15s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Jan 4 14:07:19.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8896' Jan 4 14:07:19.436: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 14:07:19.436: INFO: stdout: "pod \"pause\" force deleted\n" Jan 4 14:07:19.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8896' Jan 4 14:07:19.552: INFO: stderr: "No resources found in kubectl-8896 namespace.\n" Jan 4 14:07:19.552: INFO: stdout: "" Jan 4 14:07:19.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8896 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 14:07:19.655: INFO: stderr: "" Jan 4 14:07:19.655: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:07:19.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8896" for this suite. 
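The `kubectl label` add and remove above translate, roughly, into JSON merge patches against the pod's metadata. A sketch that only constructs the payloads; the client-go `Pods(ns).Patch` call that would submit them changes signature across releases, so it is left as a comment rather than shown:

```go
package main

import "fmt"

func main() {
	// Adding a label, roughly what
	// `kubectl label pods pause testing-label=testing-label-value` sends:
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	// Removing it (`kubectl label pods pause testing-label-`) sets the key
	// to null, which JSON merge patch semantics (RFC 7386) treat as delete:
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	fmt.Printf("add: %s\ndel: %s\n", add, del)
	// Each payload would be sent with a merge-patch content type, e.g.
	// client.CoreV1().Pods("kubectl-8896").Patch(... types.MergePatchType, add ...)
}
```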
• [SLOW TEST:15.903 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":50,"skipped":998,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:07:19.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-4c203ca1-a476-41c9-a008-aec37b8602d6 STEP: Creating a pod to test consume secrets Jan 4 14:07:19.852: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9" in namespace "projected-8807" to be "success or failure" Jan 4 14:07:19.856: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396ms Jan 4 14:07:21.865: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012499684s Jan 4 14:07:23.876: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02425223s Jan 4 14:07:25.886: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034293688s Jan 4 14:07:27.892: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040384415s Jan 4 14:07:29.899: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.046529724s Jan 4 14:07:31.904: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.05212442s Jan 4 14:07:33.912: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.059825828s STEP: Saw pod success Jan 4 14:07:33.912: INFO: Pod "pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9" satisfied condition "success or failure" Jan 4 14:07:33.918: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9 container projected-secret-volume-test: STEP: delete the pod Jan 4 14:07:34.007: INFO: Waiting for pod pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9 to disappear Jan 4 14:07:34.035: INFO: Pod pod-projected-secrets-3f43db2c-2b24-49da-946a-3cdc107557b9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:07:34.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8807" for this suite. • [SLOW TEST:14.378 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":1013,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:07:34.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:07:34.126: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 4 14:07:39.134: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 4 14:07:43.146: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 4 14:07:43.214: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2833 /apis/apps/v1/namespaces/deployment-2833/deployments/test-cleanup-deployment dc426ed0-bc01-4ac3-98ab-866808daefd6 24903 1 2020-01-04 14:07:43 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001848e18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 4 14:07:43.221: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-2833 /apis/apps/v1/namespaces/deployment-2833/replicasets/test-cleanup-deployment-55ffc6b7b6 decad2f9-0d55-428c-aae4-aca88e431903 24905 1 2020-01-04 14:07:43 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment dc426ed0-bc01-4ac3-98ab-866808daefd6 0xc001849217 0xc001849218}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001849288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:07:43.221: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 4 14:07:43.221: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2833 /apis/apps/v1/namespaces/deployment-2833/replicasets/test-cleanup-controller 45f7b78d-3d6c-4d35-945f-e0d92d75c212 24904 1 2020-01-04 14:07:34 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment dc426ed0-bc01-4ac3-98ab-866808daefd6 0xc001849147 0xc001849148}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File
IfNotPresent nil false false false}] [] Always 0xc0018491a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:07:43.352: INFO: Pod "test-cleanup-controller-lns7s" is available: &Pod{ObjectMeta:{test-cleanup-controller-lns7s test-cleanup-controller- deployment-2833 /api/v1/namespaces/deployment-2833/pods/test-cleanup-controller-lns7s 26f7a95b-a4cf-4522-865c-cf7d9bf54fcf 24897 0 2020-01-04 14:07:34 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 45f7b78d-3d6c-4d35-945f-e0d92d75c212 0xc0018496d7 0xc0018496d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c4hg9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c4hg9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c4hg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-04 14:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:07:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:07:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-04 14:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:07:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://43f8c23f846515a8af6a9e8700c5a3d7635b4b3252cfdad0612315ac9e1d0e36,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:07:43.353: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-lpzpz" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-lpzpz test-cleanup-deployment-55ffc6b7b6- deployment-2833 /api/v1/namespaces/deployment-2833/pods/test-cleanup-deployment-55ffc6b7b6-lpzpz 5739dfff-b943-46d2-bad9-4dae0f681a78 24906 0 2020-01-04 14:07:43 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 decad2f9-0d55-428c-aae4-aca88e431903 0xc001849857 0xc001849858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c4hg9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c4hg9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c4hg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:07:43.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2833" for this suite. 
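The Deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete old ReplicaSets as soon as they are scaled down, the behavior this test verifies. A minimal sketch of a Deployment with that setting, reusing the names, labels, and image visible in the log:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(0) // keep no old ReplicaSets, as in the dump above
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
					}},
				},
			},
		},
	}
	fmt.Printf("%s revisionHistoryLimit=%d\n", d.Name, *d.Spec.RevisionHistoryLimit)
}
```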
• [SLOW TEST:9.344 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":52,"skipped":1033,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:07:43.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5470 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 4 14:07:43.508: INFO: Found 0 stateful pods, waiting for 3 Jan 4 14:07:53.522: INFO: Found 1 stateful pods, waiting for 3 Jan 4 14:08:03.895: INFO: Found 2 stateful pods, waiting for 3 Jan 4 14:08:13.519: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:08:13.519: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:08:13.519: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 4 14:08:23.515: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:08:23.515: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:08:23.515: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:08:23.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5470 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:08:26.727: INFO: stderr: "I0104 14:08:26.456429 499 log.go:172] (0xc000106fd0) (0xc0007081e0) Create stream\nI0104 14:08:26.456490 499 log.go:172] (0xc000106fd0) (0xc0007081e0) Stream added, broadcasting: 1\nI0104 14:08:26.461130 499 log.go:172] (0xc000106fd0) Reply frame received for 1\nI0104 14:08:26.461160 499 log.go:172] (0xc000106fd0) (0xc000710000) Create stream\nI0104 14:08:26.461169 499 log.go:172] (0xc000106fd0) (0xc000710000) Stream added, broadcasting: 3\nI0104 14:08:26.462997 499 log.go:172] (0xc000106fd0) Reply frame received for 3\nI0104 14:08:26.463107 499 log.go:172] (0xc000106fd0) (0xc000740000) Create stream\nI0104 14:08:26.463130 499 log.go:172] (0xc000106fd0) (0xc000740000) Stream added, broadcasting: 5\nI0104 14:08:26.464741 
499 log.go:172] (0xc000106fd0) Reply frame received for 5\nI0104 14:08:26.563476 499 log.go:172] (0xc000106fd0) Data frame received for 5\nI0104 14:08:26.563592 499 log.go:172] (0xc000740000) (5) Data frame handling\nI0104 14:08:26.563661 499 log.go:172] (0xc000740000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:08:26.590413 499 log.go:172] (0xc000106fd0) Data frame received for 3\nI0104 14:08:26.590666 499 log.go:172] (0xc000710000) (3) Data frame handling\nI0104 14:08:26.590721 499 log.go:172] (0xc000710000) (3) Data frame sent\nI0104 14:08:26.714843 499 log.go:172] (0xc000106fd0) Data frame received for 1\nI0104 14:08:26.715066 499 log.go:172] (0xc000106fd0) (0xc000710000) Stream removed, broadcasting: 3\nI0104 14:08:26.715094 499 log.go:172] (0xc0007081e0) (1) Data frame handling\nI0104 14:08:26.715104 499 log.go:172] (0xc0007081e0) (1) Data frame sent\nI0104 14:08:26.715142 499 log.go:172] (0xc000106fd0) (0xc000740000) Stream removed, broadcasting: 5\nI0104 14:08:26.715160 499 log.go:172] (0xc000106fd0) (0xc0007081e0) Stream removed, broadcasting: 1\nI0104 14:08:26.715171 499 log.go:172] (0xc000106fd0) Go away received\nI0104 14:08:26.716206 499 log.go:172] (0xc000106fd0) (0xc0007081e0) Stream removed, broadcasting: 1\nI0104 14:08:26.716339 499 log.go:172] (0xc000106fd0) (0xc000710000) Stream removed, broadcasting: 3\nI0104 14:08:26.716346 499 log.go:172] (0xc000106fd0) (0xc000740000) Stream removed, broadcasting: 5\n" Jan 4 14:08:26.727: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:08:26.727: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 4 14:08:36.794: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 4 14:08:46.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5470 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:08:47.174: INFO: stderr: "I0104 14:08:46.995212 524 log.go:172] (0xc000afb130) (0xc0005d1d60) Create stream\nI0104 14:08:46.995340 524 log.go:172] (0xc000afb130) (0xc0005d1d60) Stream added, broadcasting: 1\nI0104 14:08:46.998116 524 log.go:172] (0xc000afb130) Reply frame received for 1\nI0104 14:08:46.998145 524 log.go:172] (0xc000afb130) (0xc000ac6000) Create stream\nI0104 14:08:46.998152 524 log.go:172] (0xc000afb130) (0xc000ac6000) Stream added, broadcasting: 3\nI0104 14:08:46.999083 524 log.go:172] (0xc000afb130) Reply frame received for 3\nI0104 14:08:46.999105 524 log.go:172] (0xc000afb130) (0xc000ac6280) Create stream\nI0104 14:08:46.999111 524 log.go:172] (0xc000afb130) (0xc000ac6280) Stream added, broadcasting: 5\nI0104 14:08:47.000053 524 log.go:172] (0xc000afb130) Reply frame received for 5\nI0104 14:08:47.087094 524 log.go:172] (0xc000afb130) Data frame received for 3\nI0104 14:08:47.087229 524 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0104 14:08:47.087254 524 log.go:172] (0xc000ac6000) (3) Data frame sent\nI0104 14:08:47.087307 524 log.go:172] (0xc000afb130) Data frame received for 5\nI0104 14:08:47.087332 524 log.go:172] (0xc000ac6280) (5) Data frame handling\nI0104 14:08:47.087342 524 log.go:172] (0xc000ac6280) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0104 14:08:47.167432 524 log.go:172] (0xc000afb130) Data frame received for 1\nI0104 14:08:47.167481 524 log.go:172] (0xc000afb130) (0xc000ac6000) Stream removed, broadcasting: 3\nI0104 14:08:47.167514 524 log.go:172] (0xc0005d1d60) (1) Data frame handling\nI0104 14:08:47.167543 524 log.go:172] (0xc0005d1d60) (1) Data frame sent\nI0104 14:08:47.167557 524 log.go:172] (0xc000afb130) (0xc0005d1d60) Stream removed, broadcasting: 1\nI0104 14:08:47.167684 524 log.go:172] (0xc000afb130) (0xc000ac6280) Stream removed, broadcasting: 5\nI0104 14:08:47.167780 524 log.go:172] (0xc000afb130) Go away received\nI0104 14:08:47.167885 524 log.go:172] (0xc000afb130) (0xc0005d1d60) Stream removed, broadcasting: 1\nI0104 14:08:47.167902 524 log.go:172] (0xc000afb130) (0xc000ac6000) Stream removed, broadcasting: 3\nI0104 14:08:47.167911 524 log.go:172] (0xc000afb130) (0xc000ac6280) Stream removed, broadcasting: 5\n" Jan 4 14:08:47.174: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:08:47.174: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:08:57.207: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:08:57.207: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:08:57.207: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:08:57.207: INFO: Waiting for Pod statefulset-5470/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:07.219: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:09:07.219: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:07.219: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:17.217: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:09:17.217: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:17.217: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:27.446: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:09:27.446: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:27.446: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:37.223: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:09:37.224: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:47.426: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:09:47.426: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:09:57.219: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:09:57.219: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 4 14:10:07.221: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update STEP: Rolling back to a previous revision Jan 4 14:10:17.219: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5470 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:10:17.630: INFO: stderr: "I0104 14:10:17.403553 543 log.go:172] (0xc000904000) (0xc000974000) Create stream\nI0104 14:10:17.403691 543 log.go:172] (0xc000904000) (0xc000974000) Stream added, broadcasting: 1\nI0104 14:10:17.409775 543 log.go:172] (0xc000904000) Reply frame received for 1\nI0104 14:10:17.409812 543 log.go:172] (0xc000904000) (0xc0005fbea0) Create stream\nI0104 14:10:17.409826 543 log.go:172] (0xc000904000) (0xc0005fbea0) Stream added, broadcasting: 3\nI0104 14:10:17.412408 543 log.go:172] (0xc000904000) Reply frame received for 3\nI0104 14:10:17.412466 543 log.go:172] (0xc000904000) (0xc000906000) Create stream\nI0104 14:10:17.412491 543 log.go:172] (0xc000904000) (0xc000906000) Stream added, broadcasting: 5\nI0104 14:10:17.414424 543 log.go:172] (0xc000904000) Reply frame received for 5\nI0104 14:10:17.488498 543 log.go:172] (0xc000904000) Data frame received for 5\nI0104 14:10:17.488528 543 log.go:172] (0xc000906000) (5) Data frame handling\nI0104 14:10:17.488542 543 log.go:172] (0xc000906000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:10:17.545701 543 log.go:172] (0xc000904000) Data frame received for 3\nI0104 14:10:17.545733 543 log.go:172] (0xc0005fbea0) (3) Data frame handling\nI0104 14:10:17.545771 543 log.go:172] (0xc0005fbea0) (3) Data frame sent\nI0104 14:10:17.624233 543 log.go:172] (0xc000904000) Data frame received for 1\nI0104 14:10:17.624299 543 log.go:172] (0xc000974000) (1) Data frame handling\nI0104 14:10:17.624327 543 log.go:172] (0xc000974000) (1) Data frame sent\nI0104 14:10:17.624365 543 log.go:172] (0xc000904000) (0xc000974000) Stream removed, broadcasting: 1\nI0104 14:10:17.624495 543 log.go:172] (0xc000904000) (0xc0005fbea0) Stream removed, broadcasting: 3\nI0104 14:10:17.624601 543 log.go:172] (0xc000904000) (0xc000906000) Stream removed, broadcasting: 5\nI0104 14:10:17.624645 543 log.go:172] (0xc000904000) Go away received\nI0104 14:10:17.624852 543 log.go:172] (0xc000904000) (0xc000974000) Stream removed, broadcasting: 1\nI0104 14:10:17.624883 543 log.go:172] (0xc000904000) (0xc0005fbea0) Stream removed, broadcasting: 3\nI0104 14:10:17.624899 543 log.go:172] (0xc000904000) (0xc000906000) Stream removed, broadcasting: 5\n" Jan 4 14:10:17.630: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:10:17.631: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:10:27.673: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 4 14:10:37.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5470 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:10:38.096: INFO: stderr: "I0104 14:10:37.916915 559 log.go:172] (0xc0009b2fd0) (0xc000a38500) Create stream\nI0104 14:10:37.917218 559 log.go:172] (0xc0009b2fd0) (0xc000a38500) Stream added, broadcasting: 1\nI0104 14:10:37.931847 559 log.go:172] (0xc0009b2fd0) Reply frame received for 1\nI0104 14:10:37.931976 559 log.go:172] (0xc0009b2fd0) (0xc0009a2000) Create stream\nI0104 14:10:37.932000 559 log.go:172] (0xc0009b2fd0) (0xc0009a2000) Stream added, broadcasting: 3\nI0104 14:10:37.934911 559 log.go:172] (0xc0009b2fd0) Reply frame received for 
3\nI0104 14:10:37.934949 559 log.go:172] (0xc0009b2fd0) (0xc0008ac000) Create stream\nI0104 14:10:37.934967 559 log.go:172] (0xc0009b2fd0) (0xc0008ac000) Stream added, broadcasting: 5\nI0104 14:10:37.937337 559 log.go:172] (0xc0009b2fd0) Reply frame received for 5\nI0104 14:10:38.020229 559 log.go:172] (0xc0009b2fd0) Data frame received for 3\nI0104 14:10:38.020304 559 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0104 14:10:38.020324 559 log.go:172] (0xc0009a2000) (3) Data frame sent\nI0104 14:10:38.020400 559 log.go:172] (0xc0009b2fd0) Data frame received for 5\nI0104 14:10:38.020414 559 log.go:172] (0xc0008ac000) (5) Data frame handling\nI0104 14:10:38.020434 559 log.go:172] (0xc0008ac000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:10:38.092294 559 log.go:172] (0xc0009b2fd0) Data frame received for 1\nI0104 14:10:38.092401 559 log.go:172] (0xc0009b2fd0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0104 14:10:38.092444 559 log.go:172] (0xc000a38500) (1) Data frame handling\nI0104 14:10:38.092465 559 log.go:172] (0xc000a38500) (1) Data frame sent\nI0104 14:10:38.092489 559 log.go:172] (0xc0009b2fd0) (0xc0008ac000) Stream removed, broadcasting: 5\nI0104 14:10:38.092516 559 log.go:172] (0xc0009b2fd0) (0xc000a38500) Stream removed, broadcasting: 1\nI0104 14:10:38.092567 559 log.go:172] (0xc0009b2fd0) Go away received\nI0104 14:10:38.092952 559 log.go:172] (0xc0009b2fd0) (0xc000a38500) Stream removed, broadcasting: 1\nI0104 14:10:38.092962 559 log.go:172] (0xc0009b2fd0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0104 14:10:38.092966 559 log.go:172] (0xc0009b2fd0) (0xc0008ac000) Stream removed, broadcasting: 5\n" Jan 4 14:10:38.097: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:10:38.097: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:10:48.127: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:10:48.127: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:10:48.127: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:10:48.127: INFO: Waiting for Pod statefulset-5470/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:10:58.143: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:10:58.143: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:10:58.143: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:11:08.136: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:11:08.136: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:11:08.136: INFO: Waiting for Pod statefulset-5470/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:11:18.166: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:11:18.166: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 4 14:11:28.140: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update Jan 4 14:11:28.140: INFO: Waiting for Pod statefulset-5470/ss2-0 to have revision ss2-65c7964b94 update revision 
ss2-84f9d6bf57 Jan 4 14:11:38.144: INFO: Waiting for StatefulSet statefulset-5470/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 4 14:11:48.141: INFO: Deleting all statefulset in ns statefulset-5470 Jan 4 14:11:48.147: INFO: Scaling statefulset ss2 to 0 Jan 4 14:12:28.200: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 14:12:28.205: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:12:28.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5470" for this suite. • [SLOW TEST:284.860 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":53,"skipped":1050,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:12:28.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-081738ef-7ebe-4d4f-9424-518228b12a35 STEP: Creating a pod to test consume configMaps Jan 4 14:12:28.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af" in namespace "configmap-9215" to be "success or failure" Jan 4 14:12:28.367: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 19.556495ms Jan 4 14:12:30.375: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02754952s Jan 4 14:12:32.382: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034206979s Jan 4 14:12:34.407: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059407391s Jan 4 14:12:36.413: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064665221s Jan 4 14:12:38.417: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.069333123s Jan 4 14:12:40.429: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 12.080800005s Jan 4 14:12:42.432: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Pending", Reason="", readiness=false. Elapsed: 14.084384494s Jan 4 14:12:44.439: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.091588725s STEP: Saw pod success Jan 4 14:12:44.440: INFO: Pod "pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af" satisfied condition "success or failure" Jan 4 14:12:44.444: INFO: Trying to get logs from node jerma-node pod pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af container configmap-volume-test: STEP: delete the pod Jan 4 14:12:44.555: INFO: Waiting for pod pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af to disappear Jan 4 14:12:44.562: INFO: Pod pod-configmaps-df8c6d7f-77a1-4393-9cf2-e2fdbca2f4af no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:12:44.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9215" for this suite. • [SLOW TEST:16.325 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:12:44.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:12:44.885: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a" in namespace "security-context-test-958" to be "success or failure" Jan 4 14:12:44.893: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.209968ms Jan 4 14:12:46.900: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015063157s Jan 4 14:12:48.911: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025832344s Jan 4 14:12:50.920: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034918126s Jan 4 14:12:52.928: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042273265s Jan 4 14:12:54.933: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048159307s Jan 4 14:12:56.941: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.056063982s Jan 4 14:12:56.941: INFO: Pod "busybox-readonly-false-f0c7be28-3b4a-4352-b619-521c9554c10a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:12:56.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-958" for this suite. • [SLOW TEST:12.375 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1091,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:12:56.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
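The container created above serves the HTTP request that the postStart hook will issue. A sketch of what the hooked pod's spec looks like; the path, port, and handler IP are illustrative, and note that in the v1.17-era API the handler type is corev1.Handler (renamed LifecycleHandler in later releases):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	handlerIP := "10.44.0.1" // illustrative: IP of the hook-handler pod
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					// The kubelet runs this GET right after the
					// container starts; a failure kills the container.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
							Host: handlerIP,
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```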
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 4 14:13:21.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:21.309: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 14:13:23.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:23.320: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 14:13:25.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:25.319: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 14:13:27.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:27.320: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 14:13:29.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:29.316: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 14:13:31.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:31.316: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 14:13:33.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 14:13:33.317: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:13:33.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8220" for this suite. • [SLOW TEST:36.378 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1112,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:13:33.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c98f5324-1c32-428f-9ff2-e556b81fc9fd STEP: Creating a pod to test consume configMaps Jan 4 14:13:33.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9" in namespace "configmap-3148" to be "success or failure" Jan 4 14:13:33.532: INFO: Pod 
"pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.390063ms Jan 4 14:13:35.536: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016057374s Jan 4 14:13:37.545: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025440159s Jan 4 14:13:39.552: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03231847s Jan 4 14:13:41.558: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038448964s Jan 4 14:13:43.563: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042961177s Jan 4 14:13:45.570: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.050356102s Jan 4 14:13:47.576: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.056306214s Jan 4 14:13:49.582: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.062379283s STEP: Saw pod success Jan 4 14:13:49.583: INFO: Pod "pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9" satisfied condition "success or failure" Jan 4 14:13:49.587: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9 container configmap-volume-test: STEP: delete the pod Jan 4 14:13:49.649: INFO: Waiting for pod pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9 to disappear Jan 4 14:13:49.652: INFO: Pod pod-configmaps-dbd263a1-3673-4ff8-a1ee-18ab4cc266b9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:13:49.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3148" for this suite. 
• [SLOW TEST:16.344 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1118,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:13:49.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 4 14:13:49.773: INFO: PodSpec: initContainers in spec.initContainers Jan 4 14:14:55.336: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c64d954e-c246-4205-8150-e6d6c26ab718", GenerateName:"", Namespace:"init-container-5028", SelfLink:"/api/v1/namespaces/init-container-5028/pods/pod-init-c64d954e-c246-4205-8150-e6d6c26ab718", UID:"7b0a2ed9-e9d9-4fca-b700-63d7cb46c15b", ResourceVersion:"26433", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713744029, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"773551056"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rvjnp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002dc0040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rvjnp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rvjnp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rvjnp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00058a1c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b4a060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00058a2f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00058a310)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00058a318), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00058a31c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744030, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744030, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744030, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744029, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0029960c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a60070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a600e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://11f5b808d4fc60d6ab5800124f5992ee95dab471d27562a69b8b6be4cb17e8d6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002996100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029960e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00058a3bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:14:55.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5028" for this suite. • [SLOW TEST:65.683 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":58,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:14:55.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jan 4 14:14:55.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 4 14:14:55.645: INFO: stderr: "" Jan 4 14:14:55.645: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:14:55.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6152" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":59,"skipped":1195,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:14:55.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0104 14:15:06.801791 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 4 14:15:06.801: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 14:15:06.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3007" for this suite.
• [SLOW TEST:11.156 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":60,"skipped":1201,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 14:15:06.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 4 14:15:11.697: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 4 14:15:15.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."},
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:17.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:19.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:21.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:23.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:25.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:27.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:29.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:31.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:33.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:35.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:37.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:39.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744112, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744111, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:15:42.625: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 4 14:15:42.691: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:15:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7147" for this suite. STEP: Destroying namespace "webhook-7147-markers" for this suite. 
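What the suite registered in the "Registering the crd webhook" step is, in essence, a ValidatingWebhookConfiguration whose rule matches CREATE on customresourcedefinitions in the apiextensions.k8s.io group, letting the webhook service reject them. A hedged sketch with admissionregistration/v1 types; the configuration name, service coordinates, path, and CA bundle are placeholders, not the suite's actual fixture:

package webhookdemo

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerCRDWebhook installs a validating webhook that intercepts every
// CustomResourceDefinition CREATE; the backing service can then deny the
// request, which is what the "should deny crd creation" spec asserts.
func registerCRDWebhook(cs kubernetes.Interface, caBundle []byte) error {
	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/crd"
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-demo"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	return err
}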
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:36.192 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":61,"skipped":1214,"failed":0} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:15:43.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 4 14:15:44.021: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 4 14:15:46.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:48.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:50.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:52.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:54.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:56.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:15:58.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744144, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744143, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:16:01.110: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:16:01.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:16:02.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6889" for this suite. 
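The v1-to-v2 round trip this spec exercises hinges on the CRD's spec.conversion stanza routing reads and writes through the conversion service. A sketch of that stanza using apiextensions/v1 types; the service namespace, name, and path below are placeholders (the suite points this at its sample-crd-conversion-webhook deployment):

package crddemo

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// conversionStanza builds the spec.conversion block that makes the API
// server call a webhook whenever a custom resource must be converted
// between served versions (here, v1 <-> v2).
func conversionStanza(caBundle []byte) *apiextensionsv1.CustomResourceConversion {
	path := "/crdconvert"
	return &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
}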
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:19.763 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":62,"skipped":1214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:16:02.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 4 14:16:03.215: INFO: >>> kubeConfig: /root/.kube/config Jan 4 14:16:05.819: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:16:17.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-885" for this suite. 
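The check above asserts that CRDs from two different API groups both surface in the aggregated OpenAPI document. In apiextensions/v1, only CRDs with structural schemas are published there; a sketch of a minimal structural CRD that could be instantiated once per group (all arguments are whatever the caller picks, not the suite's generated names):

package crddemo

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// minimalCRD returns a CRD with a structural schema; structural schemas are
// what the apiserver publishes into /openapi/v2, which this spec verifies
// for two CRDs in different groups.
func minimalCRD(group, plural, singular, kind string) *apiextensionsv1.CustomResourceDefinition {
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + "." + group},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: group,
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   plural,
				Singular: singular,
				Kind:     kind,
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {Type: "object"},
						},
					},
				},
			}},
		},
	}
}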
• [SLOW TEST:14.536 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":63,"skipped":1254,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:16:17.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:16:17.490: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:16:24.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4563" for this suite. 
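Listing CustomResourceDefinition objects, as this spec just did, goes through the apiextensions clientset rather than the core kubernetes one. A minimal sketch, assuming a working *rest.Config:

package crddemo

import (
	"context"
	"fmt"

	apiextclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// listCRDs prints the name of every CRD registered in the cluster.
func listCRDs(cfg *rest.Config) error {
	cs, err := apiextclientset.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}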
• [SLOW TEST:7.127 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":64,"skipped":1261,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:16:24.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6568 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 4 14:16:24.621: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 4 14:17:15.027: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6568 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:17:15.027: INFO: >>> kubeConfig: /root/.kube/config I0104 14:17:15.103232 9 log.go:172] (0xc0027d2bb0) (0xc0017d0320) Create stream I0104 14:17:15.103386 9 log.go:172] (0xc0027d2bb0) (0xc0017d0320) Stream added, broadcasting: 1 I0104 14:17:15.114154 9 log.go:172] (0xc0027d2bb0) Reply frame received for 1 I0104 14:17:15.114215 9 log.go:172] (0xc0027d2bb0) (0xc001422be0) Create stream I0104 14:17:15.114224 9 log.go:172] (0xc0027d2bb0) (0xc001422be0) Stream added, broadcasting: 3 I0104 14:17:15.116018 9 log.go:172] (0xc0027d2bb0) Reply frame received for 3 I0104 14:17:15.116038 9 log.go:172] (0xc0027d2bb0) (0xc0017d03c0) Create stream I0104 14:17:15.116045 9 log.go:172] (0xc0027d2bb0) (0xc0017d03c0) Stream added, broadcasting: 5 I0104 14:17:15.117894 9 log.go:172] (0xc0027d2bb0) Reply frame received for 5 I0104 14:17:16.232982 9 log.go:172] (0xc0027d2bb0) Data frame received for 3 I0104 14:17:16.233026 9 log.go:172] (0xc001422be0) (3) Data frame handling I0104 14:17:16.233040 9 log.go:172] (0xc001422be0) (3) Data frame sent I0104 14:17:16.604561 9 log.go:172] (0xc0027d2bb0) (0xc001422be0) Stream removed, broadcasting: 3 I0104 14:17:16.604920 9 log.go:172] (0xc0027d2bb0) Data frame received for 1 I0104 14:17:16.604963 9 log.go:172] (0xc0017d0320) (1) Data frame handling I0104 14:17:16.605117 9 log.go:172] (0xc0017d0320) (1) Data frame sent 
I0104 14:17:16.605161 9 log.go:172] (0xc0027d2bb0) (0xc0017d03c0) Stream removed, broadcasting: 5 I0104 14:17:16.605414 9 log.go:172] (0xc0027d2bb0) (0xc0017d0320) Stream removed, broadcasting: 1 I0104 14:17:16.605446 9 log.go:172] (0xc0027d2bb0) Go away received I0104 14:17:16.605574 9 log.go:172] (0xc0027d2bb0) (0xc0017d0320) Stream removed, broadcasting: 1 I0104 14:17:16.605593 9 log.go:172] (0xc0027d2bb0) (0xc001422be0) Stream removed, broadcasting: 3 I0104 14:17:16.605603 9 log.go:172] (0xc0027d2bb0) (0xc0017d03c0) Stream removed, broadcasting: 5 Jan 4 14:17:16.605: INFO: Found all expected endpoints: [netserver-0] Jan 4 14:17:16.671: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6568 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:17:16.671: INFO: >>> kubeConfig: /root/.kube/config I0104 14:17:16.730040 9 log.go:172] (0xc002b21550) (0xc0002d7720) Create stream I0104 14:17:16.730109 9 log.go:172] (0xc002b21550) (0xc0002d7720) Stream added, broadcasting: 1 I0104 14:17:16.744307 9 log.go:172] (0xc002b21550) Reply frame received for 1 I0104 14:17:16.744364 9 log.go:172] (0xc002b21550) (0xc001422dc0) Create stream I0104 14:17:16.744371 9 log.go:172] (0xc002b21550) (0xc001422dc0) Stream added, broadcasting: 3 I0104 14:17:16.747132 9 log.go:172] (0xc002b21550) Reply frame received for 3 I0104 14:17:16.747270 9 log.go:172] (0xc002b21550) (0xc0012660a0) Create stream I0104 14:17:16.747312 9 log.go:172] (0xc002b21550) (0xc0012660a0) Stream added, broadcasting: 5 I0104 14:17:16.749207 9 log.go:172] (0xc002b21550) Reply frame received for 5 I0104 14:17:17.862793 9 log.go:172] (0xc002b21550) Data frame received for 3 I0104 14:17:17.863045 9 log.go:172] (0xc001422dc0) (3) Data frame handling I0104 14:17:17.863088 9 log.go:172] (0xc001422dc0) (3) Data frame sent I0104 14:17:18.055833 9 log.go:172] (0xc002b21550) (0xc0012660a0) Stream removed, broadcasting: 5 I0104 14:17:18.056110 9 log.go:172] (0xc002b21550) Data frame received for 1 I0104 14:17:18.056154 9 log.go:172] (0xc0002d7720) (1) Data frame handling I0104 14:17:18.056184 9 log.go:172] (0xc0002d7720) (1) Data frame sent I0104 14:17:18.056244 9 log.go:172] (0xc002b21550) (0xc001422dc0) Stream removed, broadcasting: 3 I0104 14:17:18.056490 9 log.go:172] (0xc002b21550) (0xc0002d7720) Stream removed, broadcasting: 1 I0104 14:17:18.056570 9 log.go:172] (0xc002b21550) Go away received I0104 14:17:18.056701 9 log.go:172] (0xc002b21550) (0xc0002d7720) Stream removed, broadcasting: 1 I0104 14:17:18.056725 9 log.go:172] (0xc002b21550) (0xc001422dc0) Stream removed, broadcasting: 3 I0104 14:17:18.056731 9 log.go:172] (0xc002b21550) (0xc0012660a0) Stream removed, broadcasting: 5 Jan 4 14:17:18.056: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:17:18.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6568" for this suite. 
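The ExecWithOptions entries above run `echo hostName | nc -w 1 -u <podIP> 8081` inside a helper pod and expect each netserver pod's hostname back over UDP. The same probe in plain Go, for reference (stdlib only; the one-second read deadline mirrors nc -w 1, and port 8081 is taken from the log):

package netdemo

import (
	"fmt"
	"net"
	"time"
)

// udpEcho sends "hostName" to the netserver's UDP port and returns whatever
// it answers, which for the e2e netserver is its own hostname.
func udpEcho(podIP string) (string, error) {
	conn, err := net.Dial("udp", net.JoinHostPort(podIP, "8081"))
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := fmt.Fprint(conn, "hostName"); err != nil {
		return "", err
	}
	conn.SetReadDeadline(time.Now().Add(1 * time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}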
• [SLOW TEST:53.628 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:17:18.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-31dbee28-31af-42b0-92f7-8009149c7fe6 STEP: Creating a pod to test consume secrets Jan 4 14:17:18.417: INFO: Waiting up to 5m0s for pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8" in namespace "secrets-4434" to be "success or failure" Jan 4 14:17:18.493: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 75.635116ms Jan 4 14:17:20.501: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084013253s Jan 4 14:17:22.509: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091502811s Jan 4 14:17:25.217: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799993821s Jan 4 14:17:27.600: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.182865869s Jan 4 14:17:29.606: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.188582257s Jan 4 14:17:31.612: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.194676989s Jan 4 14:17:33.623: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.205576634s Jan 4 14:17:35.631: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.21425973s Jan 4 14:17:37.642: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.224911272s Jan 4 14:17:39.651: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.233879852s STEP: Saw pod success Jan 4 14:17:39.651: INFO: Pod "pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8" satisfied condition "success or failure" Jan 4 14:17:39.657: INFO: Trying to get logs from node jerma-node pod pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8 container secret-env-test: STEP: delete the pod Jan 4 14:17:39.813: INFO: Waiting for pod pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8 to disappear Jan 4 14:17:39.825: INFO: Pod pod-secrets-f8a664f8-ca56-4073-9de2-bc1805f04bb8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:17:39.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4434" for this suite. • [SLOW TEST:21.775 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1294,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:17:39.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:17:40.466: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:17:42.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:17:44.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:17:46.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:17:48.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:17:50.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744260, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:17:53.557: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:17:53.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8798-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:17:54.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8759" for this suite. STEP: Destroying namespace "webhook-8759-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.159 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":67,"skipped":1298,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:17:55.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:18:07.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-964" for this suite. 
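For reference: the [k8s.io] Docker Containers case that just finished (its timing summary follows below) verifies that a pod which sets neither command nor args runs the image's built-in ENTRYPOINT and CMD. A minimal sketch of that kind of pod spec, using an illustrative busybox image rather than the suite's own test image:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo        # illustrative name, not generated by the suite
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29            # assumption: any image with a default ENTRYPOINT/CMD works
    # no command or args set, so the image defaults are used as-is
EOF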
• [SLOW TEST:12.182 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1311,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:18:07.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jan 4 14:18:07.344: INFO: Waiting up to 5m0s for pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6" in namespace "containers-2270" to be "success or failure" Jan 4 14:18:07.356: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.793578ms Jan 4 14:18:09.361: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016959496s Jan 4 14:18:11.367: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022839597s Jan 4 14:18:13.376: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031905388s Jan 4 14:18:15.385: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040551468s Jan 4 14:18:17.426: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081971275s Jan 4 14:18:19.437: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.092248751s STEP: Saw pod success Jan 4 14:18:19.437: INFO: Pod "client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6" satisfied condition "success or failure" Jan 4 14:18:19.444: INFO: Trying to get logs from node jerma-node pod client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6 container test-container: STEP: delete the pod Jan 4 14:18:19.509: INFO: Waiting for pod client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6 to disappear Jan 4 14:18:19.520: INFO: Pod client-containers-d23c4a2b-e330-4491-85c7-f4f2f93b73f6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:18:19.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2270" for this suite. 
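The second Docker Containers case (timing summary below) checks that spec.containers[].command replaces the image's ENTRYPOINT. A minimal sketch under the same assumptions as above; the suite builds its own pod and image:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29             # assumption: stand-in for the suite's test image
    command: ["/bin/echo"]          # command overrides the image ENTRYPOINT
    args: ["entrypoint overridden"] # args overrides the image CMD (the related "docker cmd" case later in this run exercises that)
EOF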
• [SLOW TEST:12.345 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1332,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:18:19.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6526 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6526 I0104 14:18:19.719038 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6526, replica count: 2 I0104 14:18:22.769564 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:18:25.769962 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:18:28.770305 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:18:31.770684 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 4 14:18:31.770: INFO: Creating new exec pod Jan 4 14:18:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6526 execpodbrqrh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 4 14:18:43.522: INFO: stderr: "I0104 14:18:43.179107 588 log.go:172] (0xc0007908f0) (0xc0007c7ea0) Create stream\nI0104 14:18:43.179183 588 log.go:172] (0xc0007908f0) (0xc0007c7ea0) Stream added, broadcasting: 1\nI0104 14:18:43.190154 588 log.go:172] (0xc0007908f0) Reply frame received for 1\nI0104 14:18:43.190211 588 log.go:172] (0xc0007908f0) (0xc000678780) Create stream\nI0104 14:18:43.190221 588 log.go:172] (0xc0007908f0) (0xc000678780) Stream added, broadcasting: 3\nI0104 14:18:43.191674 588 log.go:172] (0xc0007908f0) Reply frame received for 3\nI0104 14:18:43.191699 588 log.go:172] (0xc0007908f0) (0xc0004d15e0) Create stream\nI0104 14:18:43.191712 588 log.go:172] (0xc0007908f0) 
(0xc0004d15e0) Stream added, broadcasting: 5\nI0104 14:18:43.197122 588 log.go:172] (0xc0007908f0) Reply frame received for 5\nI0104 14:18:43.361845 588 log.go:172] (0xc0007908f0) Data frame received for 5\nI0104 14:18:43.361925 588 log.go:172] (0xc0004d15e0) (5) Data frame handling\nI0104 14:18:43.361959 588 log.go:172] (0xc0004d15e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0104 14:18:43.383146 588 log.go:172] (0xc0007908f0) Data frame received for 5\nI0104 14:18:43.383178 588 log.go:172] (0xc0004d15e0) (5) Data frame handling\nI0104 14:18:43.383198 588 log.go:172] (0xc0004d15e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0104 14:18:43.516204 588 log.go:172] (0xc0007908f0) (0xc000678780) Stream removed, broadcasting: 3\nI0104 14:18:43.516323 588 log.go:172] (0xc0007908f0) Data frame received for 1\nI0104 14:18:43.516351 588 log.go:172] (0xc0007c7ea0) (1) Data frame handling\nI0104 14:18:43.516386 588 log.go:172] (0xc0007c7ea0) (1) Data frame sent\nI0104 14:18:43.516415 588 log.go:172] (0xc0007908f0) (0xc0004d15e0) Stream removed, broadcasting: 5\nI0104 14:18:43.516455 588 log.go:172] (0xc0007908f0) (0xc0007c7ea0) Stream removed, broadcasting: 1\nI0104 14:18:43.516485 588 log.go:172] (0xc0007908f0) Go away received\nI0104 14:18:43.517206 588 log.go:172] (0xc0007908f0) (0xc0007c7ea0) Stream removed, broadcasting: 1\nI0104 14:18:43.517221 588 log.go:172] (0xc0007908f0) (0xc000678780) Stream removed, broadcasting: 3\nI0104 14:18:43.517226 588 log.go:172] (0xc0007908f0) (0xc0004d15e0) Stream removed, broadcasting: 5\n" Jan 4 14:18:43.523: INFO: stdout: "" Jan 4 14:18:43.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6526 execpodbrqrh -- /bin/sh -x -c nc -zv -t -w 2 10.96.181.35 80' Jan 4 14:18:44.053: INFO: stderr: "I0104 14:18:43.677351 623 log.go:172] (0xc000a3d8c0) (0xc0009f8780) Create stream\nI0104 14:18:43.677458 623 log.go:172] (0xc000a3d8c0) (0xc0009f8780) Stream added, broadcasting: 1\nI0104 14:18:43.684588 623 log.go:172] (0xc000a3d8c0) Reply frame received for 1\nI0104 14:18:43.684615 623 log.go:172] (0xc000a3d8c0) (0xc0005e5ae0) Create stream\nI0104 14:18:43.684622 623 log.go:172] (0xc000a3d8c0) (0xc0005e5ae0) Stream added, broadcasting: 3\nI0104 14:18:43.686769 623 log.go:172] (0xc000a3d8c0) Reply frame received for 3\nI0104 14:18:43.686788 623 log.go:172] (0xc000a3d8c0) (0xc0005326e0) Create stream\nI0104 14:18:43.686794 623 log.go:172] (0xc000a3d8c0) (0xc0005326e0) Stream added, broadcasting: 5\nI0104 14:18:43.688440 623 log.go:172] (0xc000a3d8c0) Reply frame received for 5\nI0104 14:18:43.804608 623 log.go:172] (0xc000a3d8c0) Data frame received for 5\nI0104 14:18:43.804724 623 log.go:172] (0xc0005326e0) (5) Data frame handling\nI0104 14:18:43.804756 623 log.go:172] (0xc0005326e0) (5) Data frame sent\nI0104 14:18:43.804765 623 log.go:172] (0xc000a3d8c0) Data frame received for 5\nI0104 14:18:43.804779 623 log.go:172] (0xc0005326e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.181.35 80\nI0104 14:18:43.804799 623 log.go:172] (0xc0005326e0) (5) Data frame sent\nI0104 14:18:43.807906 623 log.go:172] (0xc000a3d8c0) Data frame received for 5\nI0104 14:18:43.807931 623 log.go:172] (0xc0005326e0) (5) Data frame handling\nI0104 14:18:43.807941 623 log.go:172] (0xc0005326e0) (5) Data frame sent\nConnection to 10.96.181.35 80 port [tcp/http] succeeded!\nI0104 14:18:44.039425 623 log.go:172] (0xc000a3d8c0) Data frame received for 1\nI0104 14:18:44.039483 623 
log.go:172] (0xc000a3d8c0) (0xc0005e5ae0) Stream removed, broadcasting: 3\nI0104 14:18:44.039530 623 log.go:172] (0xc0009f8780) (1) Data frame handling\nI0104 14:18:44.039560 623 log.go:172] (0xc0009f8780) (1) Data frame sent\nI0104 14:18:44.039580 623 log.go:172] (0xc000a3d8c0) (0xc0009f8780) Stream removed, broadcasting: 1\nI0104 14:18:44.042301 623 log.go:172] (0xc000a3d8c0) (0xc0005326e0) Stream removed, broadcasting: 5\nI0104 14:18:44.042413 623 log.go:172] (0xc000a3d8c0) (0xc0009f8780) Stream removed, broadcasting: 1\nI0104 14:18:44.042480 623 log.go:172] (0xc000a3d8c0) (0xc0005e5ae0) Stream removed, broadcasting: 3\nI0104 14:18:44.042606 623 log.go:172] (0xc000a3d8c0) (0xc0005326e0) Stream removed, broadcasting: 5\n" Jan 4 14:18:44.053: INFO: stdout: "" Jan 4 14:18:44.053: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:18:44.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6526" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:24.577 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":70,"skipped":1336,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:18:44.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-ef17029a-c383-4313-9e36-6b6473865ecf STEP: Creating a pod to test consume configMaps Jan 4 14:18:44.231: INFO: Waiting up to 5m0s for pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9" in namespace "configmap-2775" to be "success or failure" Jan 4 14:18:44.240: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.674629ms Jan 4 14:18:46.256: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024917879s Jan 4 14:18:48.264: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032865973s Jan 4 14:18:50.270: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.03887868s Jan 4 14:18:52.508: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277191941s Jan 4 14:18:54.518: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.286905389s Jan 4 14:18:56.579: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.347807874s STEP: Saw pod success Jan 4 14:18:56.579: INFO: Pod "pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9" satisfied condition "success or failure" Jan 4 14:18:56.583: INFO: Trying to get logs from node jerma-node pod pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9 container configmap-volume-test: STEP: delete the pod Jan 4 14:18:56.734: INFO: Waiting for pod pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9 to disappear Jan 4 14:18:56.756: INFO: Pod pod-configmaps-673b0406-4985-4f45-80fc-6a77de8e16a9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:18:56.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2775" for this suite. • [SLOW TEST:12.657 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1341,"failed":0} [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:18:56.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jan 4 14:18:56.872: INFO: Waiting up to 5m0s for pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420" in namespace "containers-1670" to be "success or failure" Jan 4 14:18:56.879: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420": Phase="Pending", Reason="", readiness=false. Elapsed: 7.275761ms Jan 4 14:18:58.884: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012765553s Jan 4 14:19:00.897: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025273925s Jan 4 14:19:02.918: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.04606211s Jan 4 14:19:04.925: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052893787s Jan 4 14:19:06.933: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060828447s STEP: Saw pod success Jan 4 14:19:06.933: INFO: Pod "client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420" satisfied condition "success or failure" Jan 4 14:19:06.936: INFO: Trying to get logs from node jerma-node pod client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420 container test-container: STEP: delete the pod Jan 4 14:19:07.196: INFO: Waiting for pod client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420 to disappear Jan 4 14:19:07.265: INFO: Pod client-containers-a86b232b-1ae5-4e36-839f-39d6cdab0420 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:19:07.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1670" for this suite. • [SLOW TEST:10.517 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1341,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:19:07.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-9cf85846-f903-479a-91b7-85d5df3c0d25 STEP: Creating a pod to test consume secrets Jan 4 14:19:07.460: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4" in namespace "projected-3520" to be "success or failure" Jan 4 14:19:07.470: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.457431ms Jan 4 14:19:09.479: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018940088s Jan 4 14:19:11.487: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027435051s Jan 4 14:19:13.499: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.038980244s Jan 4 14:19:15.506: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046224532s Jan 4 14:19:17.511: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051559394s STEP: Saw pod success Jan 4 14:19:17.512: INFO: Pod "pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4" satisfied condition "success or failure" Jan 4 14:19:17.514: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4 container secret-volume-test: STEP: delete the pod Jan 4 14:19:17.557: INFO: Waiting for pod pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4 to disappear Jan 4 14:19:17.561: INFO: Pod pod-projected-secrets-c0ad356a-50da-4e93-b880-ee2c152c4fb4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:19:17.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3520" for this suite. • [SLOW TEST:10.291 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1347,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:19:17.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 4 14:19:18.299: INFO: Pod name wrapped-volume-race-43010815-18f4-4c32-a518-d35e88b6a5ec: Found 0 pods out of 5 Jan 4 14:19:23.311: INFO: Pod name wrapped-volume-race-43010815-18f4-4c32-a518-d35e88b6a5ec: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-43010815-18f4-4c32-a518-d35e88b6a5ec in namespace emptydir-wrapper-1479, will wait for the garbage collector to delete the pods Jan 4 14:19:51.416: INFO: Deleting ReplicationController wrapped-volume-race-43010815-18f4-4c32-a518-d35e88b6a5ec took: 15.45034ms Jan 4 14:19:52.016: INFO: Terminating ReplicationController wrapped-volume-race-43010815-18f4-4c32-a518-d35e88b6a5ec pods took: 600.295673ms STEP: Creating RC which spawns configmap-volume pods Jan 4 14:20:13.525: INFO: Pod name wrapped-volume-race-49f47895-20f2-4909-a250-2ad28053e448: Found 0 pods out of 5 Jan 4 14:20:18.677: INFO: Pod 
name wrapped-volume-race-49f47895-20f2-4909-a250-2ad28053e448: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-49f47895-20f2-4909-a250-2ad28053e448 in namespace emptydir-wrapper-1479, will wait for the garbage collector to delete the pods Jan 4 14:20:56.852: INFO: Deleting ReplicationController wrapped-volume-race-49f47895-20f2-4909-a250-2ad28053e448 took: 11.733559ms Jan 4 14:20:57.453: INFO: Terminating ReplicationController wrapped-volume-race-49f47895-20f2-4909-a250-2ad28053e448 pods took: 600.352101ms STEP: Creating RC which spawns configmap-volume pods Jan 4 14:21:13.484: INFO: Pod name wrapped-volume-race-cdc34b70-8fe1-4e19-aa05-14234befada0: Found 0 pods out of 5 Jan 4 14:21:18.554: INFO: Pod name wrapped-volume-race-cdc34b70-8fe1-4e19-aa05-14234befada0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cdc34b70-8fe1-4e19-aa05-14234befada0 in namespace emptydir-wrapper-1479, will wait for the garbage collector to delete the pods Jan 4 14:21:50.700: INFO: Deleting ReplicationController wrapped-volume-race-cdc34b70-8fe1-4e19-aa05-14234befada0 took: 34.993413ms Jan 4 14:21:51.100: INFO: Terminating ReplicationController wrapped-volume-race-cdc34b70-8fe1-4e19-aa05-14234befada0 pods took: 400.264566ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:22:06.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1479" for this suite. • [SLOW TEST:168.856 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":74,"skipped":1349,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:22:06.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-81f29e13-8405-421b-95d8-7bf5aadce923 STEP: Creating a pod to test consume secrets Jan 4 14:22:06.545: INFO: Waiting up to 5m0s for pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269" in namespace "secrets-6801" to be "success or failure" Jan 4 14:22:06.552: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.967425ms Jan 4 14:22:08.558: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012013334s Jan 4 14:22:10.568: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022112578s Jan 4 14:22:12.616: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069962328s Jan 4 14:22:14.651: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269": Phase="Running", Reason="", readiness=true. Elapsed: 8.105389595s Jan 4 14:22:16.683: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137361946s STEP: Saw pod success Jan 4 14:22:16.683: INFO: Pod "pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269" satisfied condition "success or failure" Jan 4 14:22:16.694: INFO: Trying to get logs from node jerma-node pod pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269 container secret-volume-test: STEP: delete the pod Jan 4 14:22:16.984: INFO: Waiting for pod pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269 to disappear Jan 4 14:22:17.003: INFO: Pod pod-secrets-1ccb98b0-5729-4e96-939b-c18d11201269 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:22:17.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6801" for this suite. • [SLOW TEST:10.605 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1361,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:22:17.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:22:17.142: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:22:18.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4990" for this suite. 
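The CustomResourceDefinition case that just ran (its verdict line follows) covers defaulting both on API requests and when persisted objects are read back from storage. A hedged sketch of a v1 CRD whose structural schema declares a default; the group, names, and defaulted field are illustrative, since the suite generates random CRDs:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com           # illustrative; must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1          # filled in on create/update requests and when objects are read from storage
EOF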
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":76,"skipped":1362,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:22:18.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:22:19.703: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:22:21.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:22:23.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:22:25.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:22:27.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744539, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:22:30.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:22:30.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4187-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:22:32.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4145" for this suite. STEP: Destroying namespace "webhook-4145-markers" for this suite. 
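The webhook case above (AfterEach and timing summary follow) registers a mutating webhook for a multi-version custom resource and verifies that mutation still happens after the storage version is flipped from v1 to v2. A minimal sketch of such a registration; the service name e2e-test-webhook, namespace, group, and resource come from the log, while the path and CA bundle are placeholders:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator                 # illustrative name
webhooks:
- name: crd-mutator.webhook.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: webhook-4145            # the suite's per-test namespace, per the log
      name: e2e-test-webhook
      path: /mutating-custom-resource    # placeholder path
    caBundle: Cg==                       # placeholder; must be the CA that signed the serving cert
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]                   # mutate every served version, regardless of which is marked storage
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-4187-crds"]
EOF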
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.215 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":77,"skipped":1365,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:22:32.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jan 4 14:22:33.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8447' Jan 4 14:22:33.590: INFO: stderr: "" Jan 4 14:22:33.590: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 14:22:33.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:22:33.829: INFO: stderr: "" Jan 4 14:22:33.829: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " Jan 4 14:22:33.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:34.001: INFO: stderr: "" Jan 4 14:22:34.002: INFO: stdout: "" Jan 4 14:22:34.002: INFO: update-demo-nautilus-pkdsf is created but not running Jan 4 14:22:39.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:22:39.149: INFO: stderr: "" Jan 4 14:22:39.149: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " Jan 4 14:22:39.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:39.381: INFO: stderr: "" Jan 4 14:22:39.381: INFO: stdout: "" Jan 4 14:22:39.381: INFO: update-demo-nautilus-pkdsf is created but not running Jan 4 14:22:44.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:22:44.983: INFO: stderr: "" Jan 4 14:22:44.983: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " Jan 4 14:22:44.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:45.272: INFO: stderr: "" Jan 4 14:22:45.272: INFO: stdout: "" Jan 4 14:22:45.272: INFO: update-demo-nautilus-pkdsf is created but not running Jan 4 14:22:50.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:22:50.417: INFO: stderr: "" Jan 4 14:22:50.417: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " Jan 4 14:22:50.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:50.534: INFO: stderr: "" Jan 4 14:22:50.534: INFO: stdout: "true" Jan 4 14:22:50.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:50.627: INFO: stderr: "" Jan 4 14:22:50.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:22:50.627: INFO: validating pod update-demo-nautilus-pkdsf Jan 4 14:22:50.650: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:22:50.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:22:50.650: INFO: update-demo-nautilus-pkdsf is verified up and running Jan 4 14:22:50.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qznf5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:50.724: INFO: stderr: "" Jan 4 14:22:50.724: INFO: stdout: "true" Jan 4 14:22:50.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qznf5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:22:50.830: INFO: stderr: "" Jan 4 14:22:50.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:22:50.830: INFO: validating pod update-demo-nautilus-qznf5 Jan 4 14:22:50.835: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:22:50.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:22:50.836: INFO: update-demo-nautilus-qznf5 is verified up and running STEP: scaling down the replication controller Jan 4 14:22:50.847: INFO: scanned /root for discovery docs: Jan 4 14:22:50.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8447' Jan 4 14:22:52.107: INFO: stderr: "" Jan 4 14:22:52.107: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 14:22:52.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:22:52.402: INFO: stderr: "" Jan 4 14:22:52.403: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 4 14:22:57.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:22:57.588: INFO: stderr: "" Jan 4 14:22:57.588: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 4 14:23:02.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:23:02.760: INFO: stderr: "" Jan 4 14:23:02.760: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-qznf5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 4 14:23:07.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:23:07.929: INFO: stderr: "" Jan 4 14:23:07.929: INFO: stdout: "update-demo-nautilus-pkdsf " Jan 4 14:23:07.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:08.056: INFO: stderr: "" Jan 4 14:23:08.057: INFO: stdout: "true" Jan 4 14:23:08.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:08.195: INFO: stderr: "" Jan 4 14:23:08.195: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:23:08.195: INFO: validating pod update-demo-nautilus-pkdsf Jan 4 14:23:08.207: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:23:08.207: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:23:08.207: INFO: update-demo-nautilus-pkdsf is verified up and running STEP: scaling up the replication controller Jan 4 14:23:08.209: INFO: scanned /root for discovery docs: Jan 4 14:23:08.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8447' Jan 4 14:23:09.758: INFO: stderr: "" Jan 4 14:23:09.758: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 14:23:09.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:23:10.515: INFO: stderr: "" Jan 4 14:23:10.515: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-sfqkt " Jan 4 14:23:10.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:11.023: INFO: stderr: "" Jan 4 14:23:11.023: INFO: stdout: "true" Jan 4 14:23:11.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:11.254: INFO: stderr: "" Jan 4 14:23:11.254: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:23:11.254: INFO: validating pod update-demo-nautilus-pkdsf Jan 4 14:23:11.266: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:23:11.266: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:23:11.266: INFO: update-demo-nautilus-pkdsf is verified up and running Jan 4 14:23:11.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:11.359: INFO: stderr: "" Jan 4 14:23:11.359: INFO: stdout: "" Jan 4 14:23:11.359: INFO: update-demo-nautilus-sfqkt is created but not running Jan 4 14:23:16.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:23:16.506: INFO: stderr: "" Jan 4 14:23:16.506: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-sfqkt " Jan 4 14:23:16.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:16.599: INFO: stderr: "" Jan 4 14:23:16.599: INFO: stdout: "true" Jan 4 14:23:16.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:16.681: INFO: stderr: "" Jan 4 14:23:16.681: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:23:16.681: INFO: validating pod update-demo-nautilus-pkdsf Jan 4 14:23:16.688: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:23:16.688: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:23:16.688: INFO: update-demo-nautilus-pkdsf is verified up and running Jan 4 14:23:16.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:16.767: INFO: stderr: "" Jan 4 14:23:16.767: INFO: stdout: "" Jan 4 14:23:16.767: INFO: update-demo-nautilus-sfqkt is created but not running Jan 4 14:23:21.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8447' Jan 4 14:23:22.030: INFO: stderr: "" Jan 4 14:23:22.030: INFO: stdout: "update-demo-nautilus-pkdsf update-demo-nautilus-sfqkt " Jan 4 14:23:22.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:22.253: INFO: stderr: "" Jan 4 14:23:22.253: INFO: stdout: "true" Jan 4 14:23:22.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pkdsf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:22.453: INFO: stderr: "" Jan 4 14:23:22.453: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:23:22.453: INFO: validating pod update-demo-nautilus-pkdsf Jan 4 14:23:22.469: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:23:22.469: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:23:22.469: INFO: update-demo-nautilus-pkdsf is verified up and running Jan 4 14:23:22.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:22.555: INFO: stderr: "" Jan 4 14:23:22.555: INFO: stdout: "true" Jan 4 14:23:22.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfqkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8447' Jan 4 14:23:22.676: INFO: stderr: "" Jan 4 14:23:22.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:23:22.676: INFO: validating pod update-demo-nautilus-sfqkt Jan 4 14:23:22.680: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:23:22.680: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:23:22.680: INFO: update-demo-nautilus-sfqkt is verified up and running STEP: using delete to clean up resources Jan 4 14:23:22.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8447' Jan 4 14:23:22.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 14:23:22.773: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 4 14:23:22.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8447' Jan 4 14:23:22.911: INFO: stderr: "No resources found in kubectl-8447 namespace.\n" Jan 4 14:23:22.911: INFO: stdout: "" Jan 4 14:23:22.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8447 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 14:23:22.987: INFO: stderr: "" Jan 4 14:23:22.987: INFO: stdout: "update-demo-nautilus-pkdsf\nupdate-demo-nautilus-sfqkt\n" Jan 4 14:23:23.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8447' Jan 4 14:23:24.689: INFO: stderr: "No resources found in kubectl-8447 namespace.\n" Jan 4 14:23:24.689: INFO: stdout: "" Jan 4 14:23:24.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8447 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 14:23:24.934: INFO: stderr: "" Jan 4 14:23:24.934: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:23:24.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8447" for this suite. 
• [SLOW TEST:52.053 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":78,"skipped":1375,"failed":0} SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:23:25.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b123bc10-2c52-479f-9295-2b936cdf244a STEP: Creating a pod to test consume secrets Jan 4 14:23:25.400: INFO: Waiting up to 5m0s for pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70" in namespace "secrets-442" to be "success or failure" Jan 4 14:23:25.474: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70": Phase="Pending", Reason="", readiness=false. Elapsed: 74.342092ms Jan 4 14:23:27.479: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07904541s Jan 4 14:23:29.486: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086468421s Jan 4 14:23:31.494: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094686102s Jan 4 14:23:33.501: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101197209s Jan 4 14:23:35.508: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108479337s STEP: Saw pod success Jan 4 14:23:35.508: INFO: Pod "pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70" satisfied condition "success or failure" Jan 4 14:23:35.514: INFO: Trying to get logs from node jerma-node pod pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70 container secret-volume-test: STEP: delete the pod Jan 4 14:23:35.598: INFO: Waiting for pod pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70 to disappear Jan 4 14:23:35.613: INFO: Pod pod-secrets-29170ee1-483f-4706-81ee-f6b817b02c70 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:23:35.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-442" for this suite. STEP: Destroying namespace "secret-namespace-5488" for this suite. 
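The point of the secret test above is namespace isolation: two secrets may share a name as long as they live in different namespaces, and a pod mounts only the one from its own namespace. A rough sketch with illustrative names rather than the generated ones above:

    kubectl create namespace demo-a
    kubectl create namespace demo-b
    kubectl create secret generic shared-name --from-literal=data-1=value-1 --namespace=demo-a
    kubectl create secret generic shared-name --from-literal=data-1=value-2 --namespace=demo-b
    # A pod in demo-a that mounts secret "shared-name" as a volume sees only
    # demo-a's payload; the same-named secret in demo-b never leaks in.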
• [SLOW TEST:10.698 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1377,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:23:35.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:23:35.893: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:23:36.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9157" for this suite. 
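Creating and deleting a CustomResourceDefinition needs no harness beyond kubectl; a minimal sketch with an illustrative group and kind (the suite generates random ones):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com        # must be <plural>.<group>
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF
    kubectl delete crd foos.example.com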
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":80,"skipped":1398,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:23:36.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 4 14:23:36.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5307' Jan 4 14:23:36.988: INFO: stderr: "" Jan 4 14:23:36.988: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 Jan 4 14:23:37.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5307' Jan 4 14:23:42.350: INFO: stderr: "" Jan 4 14:23:42.350: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:23:42.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5307" for this suite. 
• [SLOW TEST:5.731 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":81,"skipped":1410,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:23:42.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:23:42.981: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 4 14:23:44.994: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744622, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:23:47.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744622, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:23:48.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744622, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:23:52.038: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 4 14:24:03.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-851 to-be-attached-pod -i -c=container1' Jan 4 14:24:03.448: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:24:03.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-851" for this suite. STEP: Destroying namespace "webhook-851-markers" for this suite. 
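The denial above surfaces only as "rc: 1": the test registers an admission webhook against the pods/attach subresource, so once registration completes the apiserver rejects the attach and kubectl exits non-zero. Replayed by hand with the names from the log:

    kubectl attach to-be-attached-pod -i -c=container1 --namespace=webhook-851
    # Expected once the webhook is registered: the request is denied by the
    # admission webhook and kubectl exits with status 1.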
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.390 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":82,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:24:03.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:24:03.901: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 4 14:24:06.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 create -f -' Jan 4 14:24:11.154: INFO: stderr: "" Jan 4 14:24:11.154: INFO: stdout: "e2e-test-crd-publish-openapi-8063-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 4 14:24:11.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 delete e2e-test-crd-publish-openapi-8063-crds test-foo' Jan 4 14:24:11.312: INFO: stderr: "" Jan 4 14:24:11.312: INFO: stdout: "e2e-test-crd-publish-openapi-8063-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 4 14:24:11.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 apply -f -' Jan 4 14:24:11.755: INFO: stderr: "" Jan 4 14:24:11.755: INFO: stdout: "e2e-test-crd-publish-openapi-8063-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 4 14:24:11.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 delete e2e-test-crd-publish-openapi-8063-crds test-foo' Jan 4 14:24:11.936: INFO: stderr: "" Jan 4 14:24:11.937: INFO: stdout: "e2e-test-crd-publish-openapi-8063-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 4 14:24:11.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 create -f -' Jan 4 14:24:12.328: INFO: rc: 1 Jan 4 14:24:12.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-9064 apply -f -' Jan 4 14:24:12.707: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 4 14:24:12.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 create -f -' Jan 4 14:24:13.099: INFO: rc: 1 Jan 4 14:24:13.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9064 apply -f -' Jan 4 14:24:13.403: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 4 14:24:13.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8063-crds' Jan 4 14:24:13.813: INFO: stderr: "" Jan 4 14:24:13.813: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8063-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 4 14:24:13.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8063-crds.metadata' Jan 4 14:24:14.054: INFO: stderr: "" Jan 4 14:24:14.054: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8063-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header).
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system.
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 4 14:24:14.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8063-crds.spec' Jan 4 14:24:14.534: INFO: stderr: "" Jan 4 14:24:14.535: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8063-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 4 14:24:14.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8063-crds.spec.bars' Jan 4 14:24:14.764: INFO: stderr: "" Jan 4 14:24:14.764: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8063-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 4 14:24:14.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8063-crds.spec.bars2' Jan 4 14:24:15.087: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:24:17.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9064" for this suite. • [SLOW TEST:14.235 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":83,"skipped":1520,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:24:17.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 14:24:18.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8" in namespace "projected-3602" to be "success or failure" Jan 4 14:24:18.402: INFO: Pod
"downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 104.104081ms Jan 4 14:24:20.408: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109748688s Jan 4 14:24:22.412: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113826233s Jan 4 14:24:24.418: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120610475s Jan 4 14:24:26.426: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128107952s Jan 4 14:24:28.437: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139528207s Jan 4 14:24:30.444: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.146119582s Jan 4 14:24:32.452: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.1538852s Jan 4 14:24:34.461: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.163382919s STEP: Saw pod success Jan 4 14:24:34.461: INFO: Pod "downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8" satisfied condition "success or failure" Jan 4 14:24:34.465: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8 container client-container: STEP: delete the pod Jan 4 14:24:34.666: INFO: Waiting for pod downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8 to disappear Jan 4 14:24:34.677: INFO: Pod downwardapi-volume-844ec137-87b7-4a69-b830-2e5e2e7ec3e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:24:34.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3602" for this suite. 
• [SLOW TEST:16.702 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1539,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:24:34.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 4 14:24:34.826: INFO: Waiting up to 5m0s for pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c" in namespace "emptydir-3612" to be "success or failure" Jan 4 14:24:34.839: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.838511ms Jan 4 14:24:36.844: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018331126s Jan 4 14:24:38.849: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022976502s Jan 4 14:24:40.855: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029221622s Jan 4 14:24:42.867: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041053612s Jan 4 14:24:44.875: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.04880102s Jan 4 14:24:46.878: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.052457089s Jan 4 14:24:48.888: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.062443315s STEP: Saw pod success Jan 4 14:24:48.889: INFO: Pod "pod-365147ab-71d5-40ec-b050-08bb7375be8c" satisfied condition "success or failure" Jan 4 14:24:48.902: INFO: Trying to get logs from node jerma-node pod pod-365147ab-71d5-40ec-b050-08bb7375be8c container test-container: STEP: delete the pod Jan 4 14:24:49.467: INFO: Waiting for pod pod-365147ab-71d5-40ec-b050-08bb7375be8c to disappear Jan 4 14:24:49.479: INFO: Pod pod-365147ab-71d5-40ec-b050-08bb7375be8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:24:49.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3612" for this suite. 
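The emptyDir variant above sets medium: Memory, which backs the volume with tmpfs; the conformance check then asserts the mount type and the default 0777 directory mode. A sketch with an illustrative pod name:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory          # tmpfs-backed
    EOF
    kubectl logs emptydir-tmpfs-demo    # expect a tmpfs mount entry and drwxrwxrwx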
• [SLOW TEST:14.853 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1542,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:24:49.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 14:24:49.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4" in namespace "downward-api-8810" to be "success or failure" Jan 4 14:24:49.842: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.772497ms Jan 4 14:24:51.850: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043907809s Jan 4 14:24:53.859: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0529683s Jan 4 14:24:55.882: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07594989s Jan 4 14:24:57.889: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082883069s Jan 4 14:24:59.921: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114639782s Jan 4 14:25:01.989: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.183114279s STEP: Saw pod success Jan 4 14:25:01.989: INFO: Pod "downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4" satisfied condition "success or failure" Jan 4 14:25:01.994: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4 container client-container: STEP: delete the pod Jan 4 14:25:02.060: INFO: Waiting for pod downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4 to disappear Jan 4 14:25:02.082: INFO: Pod downwardapi-volume-2cbdbff4-fccd-49bc-be24-d8fc3e757aa4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:25:02.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8810" for this suite. • [SLOW TEST:12.600 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1557,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:25:02.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:25:13.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7724" for this suite. • [SLOW TEST:11.489 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":87,"skipped":1559,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:25:13.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:25:13.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 4 14:25:13.966: INFO: stderr: "" Jan 4 14:25:13.966: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:25:13.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3656" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":88,"skipped":1560,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:25:13.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2322 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2322 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2322 Jan 4 14:25:14.213: INFO: Found 0 stateful pods, waiting for 1 Jan 4 14:25:24.219: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 4 14:25:24.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:25:24.697: INFO: stderr: "I0104 14:25:24.377195 1675 log.go:172] (0xc000a4ee70) (0xc000b4cb40) Create stream\nI0104 14:25:24.377281 1675 log.go:172] (0xc000a4ee70) (0xc000b4cb40) Stream added, broadcasting: 1\nI0104 14:25:24.383017 1675 log.go:172] (0xc000a4ee70) Reply frame received for 1\nI0104 14:25:24.383046 1675 log.go:172] (0xc000a4ee70) (0xc000ae2500) Create stream\nI0104 14:25:24.383060 1675 log.go:172] (0xc000a4ee70) (0xc000ae2500) Stream added, broadcasting: 3\nI0104 14:25:24.384141 1675 log.go:172] (0xc000a4ee70) Reply frame received for 3\nI0104 14:25:24.384157 1675 log.go:172] (0xc000a4ee70) (0xc000ad46e0) Create stream\nI0104 14:25:24.384162 1675 log.go:172] (0xc000a4ee70) (0xc000ad46e0) Stream added, broadcasting: 5\nI0104 14:25:24.385516 1675 log.go:172] (0xc000a4ee70) Reply frame received for 5\nI0104 14:25:24.482973 1675 log.go:172] (0xc000a4ee70) Data frame received for 5\nI0104 14:25:24.483015 1675 log.go:172] (0xc000ad46e0) (5) Data frame handling\nI0104 14:25:24.483025 1675 log.go:172] (0xc000ad46e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:25:24.538524 1675 log.go:172] (0xc000a4ee70) Data frame received for 3\nI0104 14:25:24.538591 1675 log.go:172] (0xc000ae2500) (3) Data frame handling\nI0104 14:25:24.538603 1675 log.go:172] (0xc000ae2500) (3) Data frame sent\nI0104 14:25:24.691594 1675 log.go:172] (0xc000a4ee70) (0xc000ae2500) Stream removed, broadcasting: 3\nI0104 14:25:24.691662 1675 log.go:172] (0xc000a4ee70) Data frame received for 1\nI0104 14:25:24.691693 1675 
log.go:172] (0xc000b4cb40) (1) Data frame handling\nI0104 14:25:24.691721 1675 log.go:172] (0xc000b4cb40) (1) Data frame sent\nI0104 14:25:24.691731 1675 log.go:172] (0xc000a4ee70) (0xc000b4cb40) Stream removed, broadcasting: 1\nI0104 14:25:24.691810 1675 log.go:172] (0xc000a4ee70) (0xc000ad46e0) Stream removed, broadcasting: 5\nI0104 14:25:24.691831 1675 log.go:172] (0xc000a4ee70) Go away received\nI0104 14:25:24.692084 1675 log.go:172] (0xc000a4ee70) (0xc000b4cb40) Stream removed, broadcasting: 1\nI0104 14:25:24.692096 1675 log.go:172] (0xc000a4ee70) (0xc000ae2500) Stream removed, broadcasting: 3\nI0104 14:25:24.692102 1675 log.go:172] (0xc000a4ee70) (0xc000ad46e0) Stream removed, broadcasting: 5\n" Jan 4 14:25:24.697: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:25:24.697: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:25:24.744: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 4 14:25:34.750: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:25:34.750: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 14:25:34.783: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:25:34.783: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:25:34.783: INFO: ss-1 Pending [] Jan 4 14:25:34.783: INFO: Jan 4 14:25:34.783: INFO: StatefulSet ss has not reached scale 3, at 2 Jan 4 14:25:35.794: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981955382s Jan 4 14:25:36.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97067238s Jan 4 14:25:37.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962764266s Jan 4 14:25:38.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953168349s Jan 4 14:25:40.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947526621s Jan 4 14:25:41.530: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.376561684s Jan 4 14:25:42.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.234886896s Jan 4 14:25:43.549: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.222109693s Jan 4 14:25:44.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 216.477044ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2322 Jan 4 14:25:45.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:25:46.210: INFO: stderr: "I0104 14:25:45.831839 1694 log.go:172] (0xc00001adc0) (0xc00062c1e0) Create stream\nI0104 14:25:45.832027 1694 log.go:172] (0xc00001adc0) (0xc00062c1e0) Stream added, broadcasting: 1\nI0104 14:25:45.842820 1694 log.go:172] (0xc00001adc0) Reply frame received for 1\nI0104 14:25:45.842872 1694 log.go:172] (0xc00001adc0) (0xc00065bb80) Create stream\nI0104 
14:25:45.842890 1694 log.go:172] (0xc00001adc0) (0xc00065bb80) Stream added, broadcasting: 3\nI0104 14:25:45.844529 1694 log.go:172] (0xc00001adc0) Reply frame received for 3\nI0104 14:25:45.844557 1694 log.go:172] (0xc00001adc0) (0xc00074b540) Create stream\nI0104 14:25:45.844570 1694 log.go:172] (0xc00001adc0) (0xc00074b540) Stream added, broadcasting: 5\nI0104 14:25:45.852443 1694 log.go:172] (0xc00001adc0) Reply frame received for 5\nI0104 14:25:46.029434 1694 log.go:172] (0xc00001adc0) Data frame received for 5\nI0104 14:25:46.029672 1694 log.go:172] (0xc00074b540) (5) Data frame handling\nI0104 14:25:46.029721 1694 log.go:172] (0xc00074b540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:25:46.029831 1694 log.go:172] (0xc00001adc0) Data frame received for 3\nI0104 14:25:46.029857 1694 log.go:172] (0xc00065bb80) (3) Data frame handling\nI0104 14:25:46.029876 1694 log.go:172] (0xc00065bb80) (3) Data frame sent\nI0104 14:25:46.200178 1694 log.go:172] (0xc00001adc0) (0xc00065bb80) Stream removed, broadcasting: 3\nI0104 14:25:46.200372 1694 log.go:172] (0xc00001adc0) Data frame received for 1\nI0104 14:25:46.200391 1694 log.go:172] (0xc00062c1e0) (1) Data frame handling\nI0104 14:25:46.200415 1694 log.go:172] (0xc00062c1e0) (1) Data frame sent\nI0104 14:25:46.200477 1694 log.go:172] (0xc00001adc0) (0xc00074b540) Stream removed, broadcasting: 5\nI0104 14:25:46.200557 1694 log.go:172] (0xc00001adc0) (0xc00062c1e0) Stream removed, broadcasting: 1\nI0104 14:25:46.200582 1694 log.go:172] (0xc00001adc0) Go away received\nI0104 14:25:46.201871 1694 log.go:172] (0xc00001adc0) (0xc00062c1e0) Stream removed, broadcasting: 1\nI0104 14:25:46.201888 1694 log.go:172] (0xc00001adc0) (0xc00065bb80) Stream removed, broadcasting: 3\nI0104 14:25:46.201901 1694 log.go:172] (0xc00001adc0) (0xc00074b540) Stream removed, broadcasting: 5\n" Jan 4 14:25:46.210: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:25:46.210: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:25:46.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:25:46.456: INFO: stderr: "I0104 14:25:46.309706 1715 log.go:172] (0xc000904160) (0xc0008a8140) Create stream\nI0104 14:25:46.310048 1715 log.go:172] (0xc000904160) (0xc0008a8140) Stream added, broadcasting: 1\nI0104 14:25:46.315521 1715 log.go:172] (0xc000904160) Reply frame received for 1\nI0104 14:25:46.315597 1715 log.go:172] (0xc000904160) (0xc0005c0780) Create stream\nI0104 14:25:46.315635 1715 log.go:172] (0xc000904160) (0xc0005c0780) Stream added, broadcasting: 3\nI0104 14:25:46.316842 1715 log.go:172] (0xc000904160) Reply frame received for 3\nI0104 14:25:46.316864 1715 log.go:172] (0xc000904160) (0xc00031d540) Create stream\nI0104 14:25:46.316869 1715 log.go:172] (0xc000904160) (0xc00031d540) Stream added, broadcasting: 5\nI0104 14:25:46.317699 1715 log.go:172] (0xc000904160) Reply frame received for 5\nI0104 14:25:46.379912 1715 log.go:172] (0xc000904160) Data frame received for 3\nI0104 14:25:46.379960 1715 log.go:172] (0xc0005c0780) (3) Data frame handling\nI0104 14:25:46.379977 1715 log.go:172] (0xc0005c0780) (3) Data frame sent\nI0104 14:25:46.380012 1715 log.go:172] (0xc000904160) Data frame received for 5\nI0104 14:25:46.380021 1715 log.go:172] (0xc00031d540) 
(5) Data frame handling\nI0104 14:25:46.380030 1715 log.go:172] (0xc00031d540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0104 14:25:46.452386 1715 log.go:172] (0xc000904160) (0xc0005c0780) Stream removed, broadcasting: 3\nI0104 14:25:46.452514 1715 log.go:172] (0xc000904160) Data frame received for 1\nI0104 14:25:46.452529 1715 log.go:172] (0xc0008a8140) (1) Data frame handling\nI0104 14:25:46.452538 1715 log.go:172] (0xc0008a8140) (1) Data frame sent\nI0104 14:25:46.452578 1715 log.go:172] (0xc000904160) (0xc00031d540) Stream removed, broadcasting: 5\nI0104 14:25:46.452611 1715 log.go:172] (0xc000904160) (0xc0008a8140) Stream removed, broadcasting: 1\nI0104 14:25:46.452633 1715 log.go:172] (0xc000904160) Go away received\nI0104 14:25:46.452833 1715 log.go:172] (0xc000904160) (0xc0008a8140) Stream removed, broadcasting: 1\nI0104 14:25:46.452849 1715 log.go:172] (0xc000904160) (0xc0005c0780) Stream removed, broadcasting: 3\nI0104 14:25:46.452854 1715 log.go:172] (0xc000904160) (0xc00031d540) Stream removed, broadcasting: 5\n" Jan 4 14:25:46.456: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:25:46.456: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:25:46.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:25:46.947: INFO: stderr: "I0104 14:25:46.699310 1733 log.go:172] (0xc000ada370) (0xc000a84280) Create stream\nI0104 14:25:46.699410 1733 log.go:172] (0xc000ada370) (0xc000a84280) Stream added, broadcasting: 1\nI0104 14:25:46.704912 1733 log.go:172] (0xc000ada370) Reply frame received for 1\nI0104 14:25:46.704973 1733 log.go:172] (0xc000ada370) (0xc000a84320) Create stream\nI0104 14:25:46.704982 1733 log.go:172] (0xc000ada370) (0xc000a84320) Stream added, broadcasting: 3\nI0104 14:25:46.707235 1733 log.go:172] (0xc000ada370) Reply frame received for 3\nI0104 14:25:46.707258 1733 log.go:172] (0xc000ada370) (0xc000a843c0) Create stream\nI0104 14:25:46.707267 1733 log.go:172] (0xc000ada370) (0xc000a843c0) Stream added, broadcasting: 5\nI0104 14:25:46.708707 1733 log.go:172] (0xc000ada370) Reply frame received for 5\nI0104 14:25:46.804999 1733 log.go:172] (0xc000ada370) Data frame received for 5\nI0104 14:25:46.805075 1733 log.go:172] (0xc000a843c0) (5) Data frame handling\nI0104 14:25:46.805100 1733 log.go:172] (0xc000a843c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:25:46.807158 1733 log.go:172] (0xc000ada370) Data frame received for 5\nI0104 14:25:46.807397 1733 log.go:172] (0xc000a843c0) (5) Data frame handling\nI0104 14:25:46.807410 1733 log.go:172] (0xc000a843c0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0104 14:25:46.807420 1733 log.go:172] (0xc000ada370) Data frame received for 3\nI0104 14:25:46.807462 1733 log.go:172] (0xc000a84320) (3) Data frame handling\nI0104 14:25:46.807494 1733 log.go:172] (0xc000a84320) (3) Data frame sent\nI0104 14:25:46.937341 1733 log.go:172] (0xc000ada370) (0xc000a84320) Stream removed, broadcasting: 3\nI0104 14:25:46.937528 1733 log.go:172] (0xc000ada370) Data frame received for 1\nI0104 14:25:46.937561 1733 log.go:172] (0xc000ada370) (0xc000a843c0) Stream removed, broadcasting: 
5\nI0104 14:25:46.937598 1733 log.go:172] (0xc000a84280) (1) Data frame handling\nI0104 14:25:46.937625 1733 log.go:172] (0xc000a84280) (1) Data frame sent\nI0104 14:25:46.937641 1733 log.go:172] (0xc000ada370) (0xc000a84280) Stream removed, broadcasting: 1\nI0104 14:25:46.937658 1733 log.go:172] (0xc000ada370) Go away received\nI0104 14:25:46.938561 1733 log.go:172] (0xc000ada370) (0xc000a84280) Stream removed, broadcasting: 1\nI0104 14:25:46.938586 1733 log.go:172] (0xc000ada370) (0xc000a84320) Stream removed, broadcasting: 3\nI0104 14:25:46.938602 1733 log.go:172] (0xc000ada370) (0xc000a843c0) Stream removed, broadcasting: 5\n" Jan 4 14:25:46.948: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:25:46.948: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:25:46.952: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:25:46.952: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:25:46.952: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 4 14:25:46.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:25:47.294: INFO: stderr: "I0104 14:25:47.077393 1753 log.go:172] (0xc000968630) (0xc0006f7540) Create stream\nI0104 14:25:47.077484 1753 log.go:172] (0xc000968630) (0xc0006f7540) Stream added, broadcasting: 1\nI0104 14:25:47.083417 1753 log.go:172] (0xc000968630) Reply frame received for 1\nI0104 14:25:47.083498 1753 log.go:172] (0xc000968630) (0xc0006ddae0) Create stream\nI0104 14:25:47.083505 1753 log.go:172] (0xc000968630) (0xc0006ddae0) Stream added, broadcasting: 3\nI0104 14:25:47.085188 1753 log.go:172] (0xc000968630) Reply frame received for 3\nI0104 14:25:47.085219 1753 log.go:172] (0xc000968630) (0xc00090c000) Create stream\nI0104 14:25:47.085230 1753 log.go:172] (0xc000968630) (0xc00090c000) Stream added, broadcasting: 5\nI0104 14:25:47.086372 1753 log.go:172] (0xc000968630) Reply frame received for 5\nI0104 14:25:47.163752 1753 log.go:172] (0xc000968630) Data frame received for 5\nI0104 14:25:47.163789 1753 log.go:172] (0xc00090c000) (5) Data frame handling\nI0104 14:25:47.163809 1753 log.go:172] (0xc00090c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:25:47.163838 1753 log.go:172] (0xc000968630) Data frame received for 3\nI0104 14:25:47.163849 1753 log.go:172] (0xc0006ddae0) (3) Data frame handling\nI0104 14:25:47.163865 1753 log.go:172] (0xc0006ddae0) (3) Data frame sent\nI0104 14:25:47.285036 1753 log.go:172] (0xc000968630) (0xc0006ddae0) Stream removed, broadcasting: 3\nI0104 14:25:47.285251 1753 log.go:172] (0xc000968630) Data frame received for 1\nI0104 14:25:47.285322 1753 log.go:172] (0xc0006f7540) (1) Data frame handling\nI0104 14:25:47.285378 1753 log.go:172] (0xc0006f7540) (1) Data frame sent\nI0104 14:25:47.285474 1753 log.go:172] (0xc000968630) (0xc0006f7540) Stream removed, broadcasting: 1\nI0104 14:25:47.285685 1753 log.go:172] (0xc000968630) (0xc00090c000) Stream removed, broadcasting: 5\nI0104 14:25:47.285765 1753 log.go:172] (0xc000968630) Go away received\nI0104 14:25:47.286317 1753 log.go:172] (0xc000968630) (0xc0006f7540) Stream removed, 
broadcasting: 1\nI0104 14:25:47.286376 1753 log.go:172] (0xc000968630) (0xc0006ddae0) Stream removed, broadcasting: 3\nI0104 14:25:47.286416 1753 log.go:172] (0xc000968630) (0xc00090c000) Stream removed, broadcasting: 5\n" Jan 4 14:25:47.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:25:47.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:25:47.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:25:47.543: INFO: stderr: "I0104 14:25:47.395860 1772 log.go:172] (0xc00011c2c0) (0xc00065fb80) Create stream\nI0104 14:25:47.395956 1772 log.go:172] (0xc00011c2c0) (0xc00065fb80) Stream added, broadcasting: 1\nI0104 14:25:47.398485 1772 log.go:172] (0xc00011c2c0) Reply frame received for 1\nI0104 14:25:47.398505 1772 log.go:172] (0xc00011c2c0) (0xc000634640) Create stream\nI0104 14:25:47.398511 1772 log.go:172] (0xc00011c2c0) (0xc000634640) Stream added, broadcasting: 3\nI0104 14:25:47.399613 1772 log.go:172] (0xc00011c2c0) Reply frame received for 3\nI0104 14:25:47.399632 1772 log.go:172] (0xc00011c2c0) (0xc00065fc20) Create stream\nI0104 14:25:47.399640 1772 log.go:172] (0xc00011c2c0) (0xc00065fc20) Stream added, broadcasting: 5\nI0104 14:25:47.400700 1772 log.go:172] (0xc00011c2c0) Reply frame received for 5\nI0104 14:25:47.454622 1772 log.go:172] (0xc00011c2c0) Data frame received for 5\nI0104 14:25:47.454660 1772 log.go:172] (0xc00065fc20) (5) Data frame handling\nI0104 14:25:47.454674 1772 log.go:172] (0xc00065fc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:25:47.481784 1772 log.go:172] (0xc00011c2c0) Data frame received for 3\nI0104 14:25:47.481907 1772 log.go:172] (0xc000634640) (3) Data frame handling\nI0104 14:25:47.481942 1772 log.go:172] (0xc000634640) (3) Data frame sent\nI0104 14:25:47.537313 1772 log.go:172] (0xc00011c2c0) Data frame received for 1\nI0104 14:25:47.537348 1772 log.go:172] (0xc00065fb80) (1) Data frame handling\nI0104 14:25:47.537361 1772 log.go:172] (0xc00065fb80) (1) Data frame sent\nI0104 14:25:47.537977 1772 log.go:172] (0xc00011c2c0) (0xc00065fc20) Stream removed, broadcasting: 5\nI0104 14:25:47.538015 1772 log.go:172] (0xc00011c2c0) (0xc00065fb80) Stream removed, broadcasting: 1\nI0104 14:25:47.538276 1772 log.go:172] (0xc00011c2c0) (0xc000634640) Stream removed, broadcasting: 3\nI0104 14:25:47.538302 1772 log.go:172] (0xc00011c2c0) (0xc00065fb80) Stream removed, broadcasting: 1\nI0104 14:25:47.538318 1772 log.go:172] (0xc00011c2c0) (0xc000634640) Stream removed, broadcasting: 3\nI0104 14:25:47.538328 1772 log.go:172] (0xc00011c2c0) (0xc00065fc20) Stream removed, broadcasting: 5\nI0104 14:25:47.538449 1772 log.go:172] (0xc00011c2c0) Go away received\n" Jan 4 14:25:47.544: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:25:47.544: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:25:47.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:25:48.026: INFO: stderr: "I0104 14:25:47.737720 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdb80) Create stream\nI0104 
14:25:47.737835 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdb80) Stream added, broadcasting: 1\nI0104 14:25:47.749473 1789 log.go:172] (0xc0003c2dc0) Reply frame received for 1\nI0104 14:25:47.749521 1789 log.go:172] (0xc0003c2dc0) (0xc000b14000) Create stream\nI0104 14:25:47.749535 1789 log.go:172] (0xc0003c2dc0) (0xc000b14000) Stream added, broadcasting: 3\nI0104 14:25:47.750800 1789 log.go:172] (0xc0003c2dc0) Reply frame received for 3\nI0104 14:25:47.750822 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdd60) Create stream\nI0104 14:25:47.750831 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdd60) Stream added, broadcasting: 5\nI0104 14:25:47.756138 1789 log.go:172] (0xc0003c2dc0) Reply frame received for 5\nI0104 14:25:47.830922 1789 log.go:172] (0xc0003c2dc0) Data frame received for 5\nI0104 14:25:47.830976 1789 log.go:172] (0xc0006fdd60) (5) Data frame handling\nI0104 14:25:47.831001 1789 log.go:172] (0xc0006fdd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:25:47.891601 1789 log.go:172] (0xc0003c2dc0) Data frame received for 3\nI0104 14:25:47.891652 1789 log.go:172] (0xc000b14000) (3) Data frame handling\nI0104 14:25:47.891703 1789 log.go:172] (0xc000b14000) (3) Data frame sent\nI0104 14:25:48.008702 1789 log.go:172] (0xc0003c2dc0) (0xc000b14000) Stream removed, broadcasting: 3\nI0104 14:25:48.009095 1789 log.go:172] (0xc0003c2dc0) Data frame received for 1\nI0104 14:25:48.009578 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdd60) Stream removed, broadcasting: 5\nI0104 14:25:48.009720 1789 log.go:172] (0xc0006fdb80) (1) Data frame handling\nI0104 14:25:48.009783 1789 log.go:172] (0xc0006fdb80) (1) Data frame sent\nI0104 14:25:48.010038 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdb80) Stream removed, broadcasting: 1\nI0104 14:25:48.010161 1789 log.go:172] (0xc0003c2dc0) Go away received\nI0104 14:25:48.011405 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdb80) Stream removed, broadcasting: 1\nI0104 14:25:48.011443 1789 log.go:172] (0xc0003c2dc0) (0xc000b14000) Stream removed, broadcasting: 3\nI0104 14:25:48.011462 1789 log.go:172] (0xc0003c2dc0) (0xc0006fdd60) Stream removed, broadcasting: 5\n" Jan 4 14:25:48.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:25:48.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:25:48.026: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 14:25:48.049: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 4 14:25:58.109: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:25:58.109: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:25:58.109: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:25:58.129: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:25:58.129: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:25:58.129: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:25:58.129: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:25:58.129: INFO: Jan 4 14:25:58.129: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:25:59.885: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:25:59.885: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:25:59.885: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:25:59.885: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:25:59.885: INFO: Jan 4 14:25:59.885: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:00.895: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:00.896: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:00.896: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:00.896: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:00.896: INFO: Jan 4 14:26:00.896: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:02.050: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:02.050: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:02.050: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:02.050: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:02.050: INFO: Jan 4 14:26:02.050: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:03.058: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:03.058: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:03.058: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 
14:25:34 +0000 UTC }] Jan 4 14:26:03.058: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:03.058: INFO: Jan 4 14:26:03.058: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:04.087: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:04.087: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:04.087: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:04.087: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:04.088: INFO: Jan 4 14:26:04.088: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:05.097: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:05.098: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:05.098: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:05.098: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:05.098: INFO: Jan 4 14:26:05.098: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:06.106: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:06.106: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:06.106: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:06.107: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:06.107: INFO: Jan 4 14:26:06.107: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:07.110: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:07.111: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:07.111: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:07.111: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:07.111: INFO: Jan 4 14:26:07.111: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 4 14:26:08.116: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 14:26:08.116: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:14 +0000 UTC }] Jan 4 14:26:08.116: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:08.116: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:34 +0000 UTC }] Jan 4 14:26:08.117: INFO: Jan 4 14:26:08.117: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2322 Jan 4 14:26:09.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:26:09.443: INFO: rc: 1 Jan 4 14:26:09.443: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 4 14:26:19.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:26:19.570: INFO: rc: 1 Jan 4 14:26:19.570: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:26:29.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:26:29.703: INFO: rc: 1 Jan 4 14:26:29.704: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:26:39.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:26:39.889: INFO: rc: 1 Jan 4 14:26:39.890: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:26:49.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:26:50.097: INFO: rc: 1 Jan 4 14:26:50.097: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:27:00.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:27:00.300: INFO: rc: 1 Jan 4 14:27:00.300: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:27:10.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:27:10.482: INFO: rc: 1 Jan 4 14:27:10.482: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:27:20.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:27:20.655: INFO: rc: 1 Jan 4 14:27:20.655: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:27:30.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:27:30.829: INFO: rc: 1 Jan 4 14:27:30.829: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:27:40.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:27:41.002: INFO: rc: 1 Jan 4 14:27:41.002: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:27:51.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:27:51.198: INFO: rc: 1 Jan 4 14:27:51.199: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:28:01.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:28:01.390: INFO: rc: 1 Jan 4 14:28:01.390: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:28:11.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:28:11.544: INFO: rc: 1 Jan 4 14:28:11.544: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:28:21.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:28:21.681: INFO: rc: 1 Jan 4 14:28:21.682: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:28:31.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:28:31.883: INFO: rc: 1 Jan 4 14:28:31.884: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:28:41.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:28:42.092: INFO: rc: 1 Jan 4 14:28:42.092: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:28:52.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:28:52.245: INFO: rc: 1 Jan 4 14:28:52.246: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:29:02.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:29:02.334: INFO: rc: 1 Jan 4 14:29:02.334: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:29:12.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:29:12.466: INFO: rc: 1 Jan 4 14:29:12.467: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:29:22.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:29:22.663: INFO: rc: 1 Jan 4 14:29:22.664: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:29:32.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:29:32.809: INFO: rc: 1 Jan 4 14:29:32.809: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command 
stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:29:42.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:29:42.955: INFO: rc: 1 Jan 4 14:29:42.955: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:29:52.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:29:53.070: INFO: rc: 1 Jan 4 14:29:53.071: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:30:03.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:30:03.239: INFO: rc: 1 Jan 4 14:30:03.239: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:30:13.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:30:13.482: INFO: rc: 1 Jan 4 14:30:13.482: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:30:23.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:30:23.694: INFO: rc: 1 Jan 4 14:30:23.696: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:30:33.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:30:33.870: INFO: rc: 1 Jan 4 14:30:33.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 Jan 4 14:30:43.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:30:44.069: INFO: rc: 1 Jan 4 14:30:44.069: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:30:54.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:30:54.143: INFO: rc: 1 Jan 4 14:30:54.143: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:31:04.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:31:04.263: INFO: rc: 1 Jan 4 14:31:04.263: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 4 14:31:14.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2322 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:31:14.469: INFO: rc: 1 Jan 4 14:31:14.469: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 4 14:31:14.469: INFO: Scaling statefulset ss to 0 Jan 4 14:31:14.481: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 4 14:31:14.483: INFO: Deleting all statefulset in ns statefulset-2322 Jan 4 14:31:14.485: INFO: Scaling statefulset ss to 0 Jan 4 14:31:14.500: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 14:31:14.502: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:31:14.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2322" for this suite. 
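What the run above exercises is burst scaling: with podManagementPolicy: Parallel the StatefulSet controller creates and deletes pods without waiting for ordinal predecessors to become Ready, so scaling from 1 to 3 and from 3 to 0 both proceed while every pod fails its readiness probe (the probe reads index.html, which the test moves out of the httpd document root). The same manipulation can be reproduced by hand; a minimal sketch, assuming a StatefulSet named ss running httpd with such a probe, plus a hypothetical namespace and pod label not taken from this log:

NS=statefulset-demo                       # hypothetical namespace
# Break readiness on ss-0 by hiding the file its readiness probe fetches
kubectl -n "$NS" exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# With Parallel pod management, scale-up does not wait for ss-0 to be Ready
kubectl -n "$NS" scale statefulset ss --replicas=3
kubectl -n "$NS" get pods -l app=ss -w    # ss-1 and ss-2 appear despite ss-0 unready (label is an assumption; Ctrl-C to stop)
# Scale-down likewise proceeds past unhealthy pods
kubectl -n "$NS" scale statefulset ss --replicas=0

Note the `|| true` in the logged commands: once scale-down starts, the target pod may already be gone, so the mv is allowed to fail and the test retries on a 10s cadence. That is exactly the long sequence of rc: 1 / "pods \"ss-0\" not found" retries above, which ends only when the helper gives up and the test confirms status.replicas has reached 0.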
• [SLOW TEST:360.592 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":89,"skipped":1563,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:31:14.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 4 14:31:14.716: INFO: Waiting up to 5m0s for pod "pod-84613c97-7e43-424b-8f02-05f592b974c5" in namespace "emptydir-6385" to be "success or failure" Jan 4 14:31:14.742: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.673065ms Jan 4 14:31:16.749: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033745175s Jan 4 14:31:18.761: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044869144s Jan 4 14:31:20.770: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054471606s Jan 4 14:31:22.778: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062593009s Jan 4 14:31:24.783: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066981167s STEP: Saw pod success Jan 4 14:31:24.783: INFO: Pod "pod-84613c97-7e43-424b-8f02-05f592b974c5" satisfied condition "success or failure" Jan 4 14:31:24.786: INFO: Trying to get logs from node jerma-node pod pod-84613c97-7e43-424b-8f02-05f592b974c5 container test-container: STEP: delete the pod Jan 4 14:31:24.916: INFO: Waiting for pod pod-84613c97-7e43-424b-8f02-05f592b974c5 to disappear Jan 4 14:31:24.926: INFO: Pod pod-84613c97-7e43-424b-8f02-05f592b974c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:31:24.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6385" for this suite. 
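The EmptyDir test above follows the suite's "pod as assertion" pattern: create a short-lived pod whose container writes a file with the requested mode into an emptyDir volume, wait for the pod to reach "success or failure", then read its logs to verify the observed permissions before deleting it. A rough hand-rolled equivalent, using busybox and hypothetical names (the conformance test uses its own mounttest image and flags, which this log does not show):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # the "non-root" part of (non-root,0666,default)
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /vol/f && chmod 0666 /vol/f && stat -c '%a' /vol/f"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  volumes:
  - name: vol
    emptyDir: {}                  # "default" medium, i.e. node disk rather than Memory
EOF
# once the pod phase is Succeeded:
kubectl logs emptydir-mode-demo   # expect: 666
kubectl delete pod emptydir-mode-demo

The repeated Phase="Pending" polls in the log are this same wait loop: the framework polls the pod every ~2s for up to 5m until the phase becomes Succeeded (or Failed), then fetches the container logs to check the result.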
• [SLOW TEST:10.368 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1567,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:31:24.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 4 14:31:25.179: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 4 14:31:35.681: INFO: >>> kubeConfig: /root/.kube/config Jan 4 14:31:37.562: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:31:48.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-667" for this suite. 
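The CustomResourcePublishOpenAPI test above checks that the apiserver publishes every served version of a CRD group into its OpenAPI document, whether the versions come from one multi-version CRD or from two CRDs in the same group; the pauses in the log are the asynchronous OpenAPI aggregation catching up. A hand-rolled check along the same lines, with a hypothetical group demo.example.com (the test generates random group names that are not shown here):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties: {size: {type: integer}}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties: {size: {type: integer}}
EOF
# After publishing catches up (it can take a few seconds), both served
# versions are visible in the OpenAPI schema that kubectl explain reads:
kubectl explain widgets --api-version=demo.example.com/v1
kubectl explain widgets --api-version=demo.example.com/v2
kubectl delete crd widgets.demo.example.com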
• [SLOW TEST:23.946 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":91,"skipped":1589,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:31:48.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:31:50.074: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:31:52.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:31:54.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:31:56.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:31:58.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:32:00.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:32:02.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745110, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:32:05.141: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:32:05.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7210" for this suite. STEP: Destroying namespace "webhook-7210-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.880 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":92,"skipped":1593,"failed":0} S ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:32:05.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-4615 STEP: creating replication controller nodeport-test in namespace services-4615 I0104 14:32:05.972321 9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4615, replica count: 2 I0104 14:32:09.022879 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:32:12.023321 9 
runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:32:15.023691 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:32:18.023939 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 14:32:21.024299 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 4 14:32:21.024: INFO: Creating new exec pod Jan 4 14:32:32.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4615 execpodjj4fk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 4 14:32:32.820: INFO: stderr: "I0104 14:32:32.292765 2386 log.go:172] (0xc000a1e000) (0xc000ab00a0) Create stream\nI0104 14:32:32.292871 2386 log.go:172] (0xc000a1e000) (0xc000ab00a0) Stream added, broadcasting: 1\nI0104 14:32:32.302462 2386 log.go:172] (0xc000a1e000) Reply frame received for 1\nI0104 14:32:32.302518 2386 log.go:172] (0xc000a1e000) (0xc000b08320) Create stream\nI0104 14:32:32.302541 2386 log.go:172] (0xc000a1e000) (0xc000b08320) Stream added, broadcasting: 3\nI0104 14:32:32.305456 2386 log.go:172] (0xc000a1e000) Reply frame received for 3\nI0104 14:32:32.305490 2386 log.go:172] (0xc000a1e000) (0xc000ab0140) Create stream\nI0104 14:32:32.305502 2386 log.go:172] (0xc000a1e000) (0xc000ab0140) Stream added, broadcasting: 5\nI0104 14:32:32.309432 2386 log.go:172] (0xc000a1e000) Reply frame received for 5\nI0104 14:32:32.574093 2386 log.go:172] (0xc000a1e000) Data frame received for 5\nI0104 14:32:32.574183 2386 log.go:172] (0xc000ab0140) (5) Data frame handling\nI0104 14:32:32.574205 2386 log.go:172] (0xc000ab0140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0104 14:32:32.607410 2386 log.go:172] (0xc000a1e000) Data frame received for 5\nI0104 14:32:32.607508 2386 log.go:172] (0xc000ab0140) (5) Data frame handling\nI0104 14:32:32.607681 2386 log.go:172] (0xc000ab0140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0104 14:32:32.811718 2386 log.go:172] (0xc000a1e000) Data frame received for 1\nI0104 14:32:32.811810 2386 log.go:172] (0xc000ab00a0) (1) Data frame handling\nI0104 14:32:32.811834 2386 log.go:172] (0xc000ab00a0) (1) Data frame sent\nI0104 14:32:32.812231 2386 log.go:172] (0xc000a1e000) (0xc000ab0140) Stream removed, broadcasting: 5\nI0104 14:32:32.812465 2386 log.go:172] (0xc000a1e000) (0xc000ab00a0) Stream removed, broadcasting: 1\nI0104 14:32:32.812617 2386 log.go:172] (0xc000a1e000) (0xc000b08320) Stream removed, broadcasting: 3\nI0104 14:32:32.812689 2386 log.go:172] (0xc000a1e000) Go away received\nI0104 14:32:32.813997 2386 log.go:172] (0xc000a1e000) (0xc000ab00a0) Stream removed, broadcasting: 1\nI0104 14:32:32.814012 2386 log.go:172] (0xc000a1e000) (0xc000b08320) Stream removed, broadcasting: 3\nI0104 14:32:32.814017 2386 log.go:172] (0xc000a1e000) (0xc000ab0140) Stream removed, broadcasting: 5\n" Jan 4 14:32:32.820: INFO: stdout: "" Jan 4 14:32:32.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4615 execpodjj4fk -- /bin/sh -x -c nc -zv -t -w 2 10.96.176.118 80' Jan 4 14:32:33.215: INFO: stderr: "I0104 14:32:33.009382 2402 log.go:172] (0xc000591ad0) (0xc000a34500) Create 
stream\nI0104 14:32:33.009634 2402 log.go:172] (0xc000591ad0) (0xc000a34500) Stream added, broadcasting: 1\nI0104 14:32:33.016388 2402 log.go:172] (0xc000591ad0) Reply frame received for 1\nI0104 14:32:33.016411 2402 log.go:172] (0xc000591ad0) (0xc000a345a0) Create stream\nI0104 14:32:33.016417 2402 log.go:172] (0xc000591ad0) (0xc000a345a0) Stream added, broadcasting: 3\nI0104 14:32:33.018656 2402 log.go:172] (0xc000591ad0) Reply frame received for 3\nI0104 14:32:33.018672 2402 log.go:172] (0xc000591ad0) (0xc00095e5a0) Create stream\nI0104 14:32:33.018682 2402 log.go:172] (0xc000591ad0) (0xc00095e5a0) Stream added, broadcasting: 5\nI0104 14:32:33.021956 2402 log.go:172] (0xc000591ad0) Reply frame received for 5\nI0104 14:32:33.109018 2402 log.go:172] (0xc000591ad0) Data frame received for 5\nI0104 14:32:33.109063 2402 log.go:172] (0xc00095e5a0) (5) Data frame handling\nI0104 14:32:33.109077 2402 log.go:172] (0xc00095e5a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.176.118 80\nI0104 14:32:33.111460 2402 log.go:172] (0xc000591ad0) Data frame received for 5\nI0104 14:32:33.111473 2402 log.go:172] (0xc00095e5a0) (5) Data frame handling\nI0104 14:32:33.111481 2402 log.go:172] (0xc00095e5a0) (5) Data frame sent\nConnection to 10.96.176.118 80 port [tcp/http] succeeded!\nI0104 14:32:33.207960 2402 log.go:172] (0xc000591ad0) Data frame received for 1\nI0104 14:32:33.208039 2402 log.go:172] (0xc000a34500) (1) Data frame handling\nI0104 14:32:33.208052 2402 log.go:172] (0xc000a34500) (1) Data frame sent\nI0104 14:32:33.208111 2402 log.go:172] (0xc000591ad0) (0xc000a34500) Stream removed, broadcasting: 1\nI0104 14:32:33.208500 2402 log.go:172] (0xc000591ad0) (0xc000a345a0) Stream removed, broadcasting: 3\nI0104 14:32:33.208709 2402 log.go:172] (0xc000591ad0) (0xc00095e5a0) Stream removed, broadcasting: 5\nI0104 14:32:33.208728 2402 log.go:172] (0xc000591ad0) (0xc000a34500) Stream removed, broadcasting: 1\nI0104 14:32:33.208732 2402 log.go:172] (0xc000591ad0) (0xc000a345a0) Stream removed, broadcasting: 3\nI0104 14:32:33.208736 2402 log.go:172] (0xc000591ad0) (0xc00095e5a0) Stream removed, broadcasting: 5\n" Jan 4 14:32:33.215: INFO: stdout: "" Jan 4 14:32:33.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4615 execpodjj4fk -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32565' Jan 4 14:32:33.587: INFO: stderr: "I0104 14:32:33.354067 2417 log.go:172] (0xc00090ab00) (0xc00092c320) Create stream\nI0104 14:32:33.354353 2417 log.go:172] (0xc00090ab00) (0xc00092c320) Stream added, broadcasting: 1\nI0104 14:32:33.390449 2417 log.go:172] (0xc00090ab00) Reply frame received for 1\nI0104 14:32:33.390481 2417 log.go:172] (0xc00090ab00) (0xc000680780) Create stream\nI0104 14:32:33.390488 2417 log.go:172] (0xc00090ab00) (0xc000680780) Stream added, broadcasting: 3\nI0104 14:32:33.392203 2417 log.go:172] (0xc00090ab00) Reply frame received for 3\nI0104 14:32:33.392240 2417 log.go:172] (0xc00090ab00) (0xc0004eb540) Create stream\nI0104 14:32:33.392251 2417 log.go:172] (0xc00090ab00) (0xc0004eb540) Stream added, broadcasting: 5\nI0104 14:32:33.394945 2417 log.go:172] (0xc00090ab00) Reply frame received for 5\nI0104 14:32:33.495541 2417 log.go:172] (0xc00090ab00) Data frame received for 5\nI0104 14:32:33.495599 2417 log.go:172] (0xc0004eb540) (5) Data frame handling\nI0104 14:32:33.495610 2417 log.go:172] (0xc0004eb540) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32565\nConnection to 10.96.2.250 32565 port [tcp/32565] succeeded!\nI0104 14:32:33.580713 2417 
log.go:172] (0xc00090ab00) (0xc000680780) Stream removed, broadcasting: 3\nI0104 14:32:33.580963 2417 log.go:172] (0xc00090ab00) Data frame received for 1\nI0104 14:32:33.581038 2417 log.go:172] (0xc00092c320) (1) Data frame handling\nI0104 14:32:33.581080 2417 log.go:172] (0xc00090ab00) (0xc0004eb540) Stream removed, broadcasting: 5\nI0104 14:32:33.581112 2417 log.go:172] (0xc00092c320) (1) Data frame sent\nI0104 14:32:33.581123 2417 log.go:172] (0xc00090ab00) (0xc00092c320) Stream removed, broadcasting: 1\nI0104 14:32:33.581150 2417 log.go:172] (0xc00090ab00) Go away received\nI0104 14:32:33.581496 2417 log.go:172] (0xc00090ab00) (0xc00092c320) Stream removed, broadcasting: 1\nI0104 14:32:33.581508 2417 log.go:172] (0xc00090ab00) (0xc000680780) Stream removed, broadcasting: 3\nI0104 14:32:33.581516 2417 log.go:172] (0xc00090ab00) (0xc0004eb540) Stream removed, broadcasting: 5\n" Jan 4 14:32:33.587: INFO: stdout: "" Jan 4 14:32:33.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4615 execpodjj4fk -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32565' Jan 4 14:32:34.032: INFO: stderr: "I0104 14:32:33.730272 2437 log.go:172] (0xc0006ee630) (0xc0007460a0) Create stream\nI0104 14:32:33.730480 2437 log.go:172] (0xc0006ee630) (0xc0007460a0) Stream added, broadcasting: 1\nI0104 14:32:33.738087 2437 log.go:172] (0xc0006ee630) Reply frame received for 1\nI0104 14:32:33.738114 2437 log.go:172] (0xc0006ee630) (0xc00088e000) Create stream\nI0104 14:32:33.738122 2437 log.go:172] (0xc0006ee630) (0xc00088e000) Stream added, broadcasting: 3\nI0104 14:32:33.739919 2437 log.go:172] (0xc0006ee630) Reply frame received for 3\nI0104 14:32:33.739940 2437 log.go:172] (0xc0006ee630) (0xc00088e0a0) Create stream\nI0104 14:32:33.739949 2437 log.go:172] (0xc0006ee630) (0xc00088e0a0) Stream added, broadcasting: 5\nI0104 14:32:33.741062 2437 log.go:172] (0xc0006ee630) Reply frame received for 5\nI0104 14:32:33.905619 2437 log.go:172] (0xc0006ee630) Data frame received for 5\nI0104 14:32:33.905702 2437 log.go:172] (0xc00088e0a0) (5) Data frame handling\nI0104 14:32:33.905729 2437 log.go:172] (0xc00088e0a0) (5) Data frame sent\nI0104 14:32:33.905741 2437 log.go:172] (0xc0006ee630) Data frame received for 5\nI0104 14:32:33.905769 2437 log.go:172] (0xc00088e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 32565\nConnection to 10.96.1.234 32565 port [tcp/32565] succeeded!\nI0104 14:32:33.905866 2437 log.go:172] (0xc00088e0a0) (5) Data frame sent\nI0104 14:32:34.024009 2437 log.go:172] (0xc0006ee630) Data frame received for 1\nI0104 14:32:34.024186 2437 log.go:172] (0xc0006ee630) (0xc00088e000) Stream removed, broadcasting: 3\nI0104 14:32:34.024292 2437 log.go:172] (0xc0007460a0) (1) Data frame handling\nI0104 14:32:34.024321 2437 log.go:172] (0xc0007460a0) (1) Data frame sent\nI0104 14:32:34.024339 2437 log.go:172] (0xc0006ee630) (0xc00088e0a0) Stream removed, broadcasting: 5\nI0104 14:32:34.024379 2437 log.go:172] (0xc0006ee630) (0xc0007460a0) Stream removed, broadcasting: 1\nI0104 14:32:34.024406 2437 log.go:172] (0xc0006ee630) Go away received\nI0104 14:32:34.024744 2437 log.go:172] (0xc0006ee630) (0xc0007460a0) Stream removed, broadcasting: 1\nI0104 14:32:34.024765 2437 log.go:172] (0xc0006ee630) (0xc00088e000) Stream removed, broadcasting: 3\nI0104 14:32:34.024773 2437 log.go:172] (0xc0006ee630) (0xc00088e0a0) Stream removed, broadcasting: 5\n" Jan 4 14:32:34.032: INFO: stdout: "" [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:32:34.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4615" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.274 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":93,"skipped":1594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:32:34.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jan 4 14:32:34.810: INFO: created pod pod-service-account-defaultsa Jan 4 14:32:34.810: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 4 14:32:34.832: INFO: created pod pod-service-account-mountsa Jan 4 14:32:34.832: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 4 14:32:34.911: INFO: created pod pod-service-account-nomountsa Jan 4 14:32:34.911: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 4 14:32:34.927: INFO: created pod pod-service-account-defaultsa-mountspec Jan 4 14:32:34.927: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 4 14:32:34.966: INFO: created pod pod-service-account-mountsa-mountspec Jan 4 14:32:34.966: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 4 14:32:35.130: INFO: created pod pod-service-account-nomountsa-mountspec Jan 4 14:32:35.130: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 4 14:32:35.138: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 4 14:32:35.138: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 4 14:32:35.149: INFO: created pod pod-service-account-mountsa-nomountspec Jan 4 14:32:35.149: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 4 14:32:35.387: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 4 14:32:35.387: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:32:35.388: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "svcaccounts-2117" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":94,"skipped":1630,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:32:35.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jan 4 14:32:37.297: INFO: Waiting up to 5m0s for pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff" in namespace "emptydir-1392" to be "success or failure" Jan 4 14:32:37.798: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 501.62249ms Jan 4 14:32:39.867: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56998112s Jan 4 14:32:44.227: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.929985811s Jan 4 14:32:47.570: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.273475848s Jan 4 14:32:49.754: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.457239884s Jan 4 14:32:51.974: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 14.677542654s Jan 4 14:32:54.444: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 17.146762396s Jan 4 14:32:57.776: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 20.479403616s Jan 4 14:33:00.025: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 22.728547765s Jan 4 14:33:02.601: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 25.304639342s Jan 4 14:33:04.609: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 27.312152172s Jan 4 14:33:06.616: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 29.318946184s Jan 4 14:33:08.621: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 31.324061796s Jan 4 14:33:10.625: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 33.328032237s Jan 4 14:33:12.631: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 35.334508683s STEP: Saw pod success Jan 4 14:33:12.632: INFO: Pod "pod-a9bc9631-d9c0-4144-be05-063be1aed6ff" satisfied condition "success or failure" Jan 4 14:33:12.634: INFO: Trying to get logs from node jerma-node pod pod-a9bc9631-d9c0-4144-be05-063be1aed6ff container test-container: STEP: delete the pod Jan 4 14:33:12.707: INFO: Waiting for pod pod-a9bc9631-d9c0-4144-be05-063be1aed6ff to disappear Jan 4 14:33:12.713: INFO: Pod pod-a9bc9631-d9c0-4144-be05-063be1aed6ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:33:12.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1392" for this suite. • [SLOW TEST:37.283 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1642,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:33:12.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d2960a92-e3eb-4879-852d-a8eaabc30f7c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d2960a92-e3eb-4879-852d-a8eaabc30f7c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:33:23.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-720" for this suite. 
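------------------------------
The update propagation this projected-configMap test asserts is ordinary kubelet behaviour and easy to observe directly: configMap-backed and projected volumes are re-synced in place, so a running pod sees new data without a restart (after the kubelet's sync delay, typically under a minute). A sketch with illustrative names:

kubectl create configmap demo-cm --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cfg/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
kubectl patch configmap demo-cm -p '{"data":{"key":"value-2"}}'
kubectl logs -f cm-watch-demo     # output flips from value-1 to value-2 with no restart
------------------------------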
• [SLOW TEST:10.339 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1669,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:33:23.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 4 14:33:34.029: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1f2a1ea5-79fc-4a79-98dc-8673aa89f673" Jan 4 14:33:34.030: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1f2a1ea5-79fc-4a79-98dc-8673aa89f673" in namespace "pods-6345" to be "terminated due to deadline exceeded" Jan 4 14:33:34.038: INFO: Pod "pod-update-activedeadlineseconds-1f2a1ea5-79fc-4a79-98dc-8673aa89f673": Phase="Running", Reason="", readiness=true. Elapsed: 8.393564ms Jan 4 14:33:36.044: INFO: Pod "pod-update-activedeadlineseconds-1f2a1ea5-79fc-4a79-98dc-8673aa89f673": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014541521s Jan 4 14:33:36.044: INFO: Pod "pod-update-activedeadlineseconds-1f2a1ea5-79fc-4a79-98dc-8673aa89f673" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:33:36.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6345" for this suite. 
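------------------------------
The activeDeadlineSeconds test above exercises one of the few in-place pod spec updates the API server permits: spec.activeDeadlineSeconds may be added or decreased (never extended) on a running pod, and once the pod outlives it the kubelet fails the pod, matching the Phase="Failed", Reason="DeadlineExceeded" transition logged above. A sketch with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  restartPolicy: Never
  activeDeadlineSeconds: 600
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
EOF
# Shorten the deadline below the pod's current age:
kubectl patch pod deadline-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
sleep 10
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}{"\n"}'
# expected: Failed/DeadlineExceeded
------------------------------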
• [SLOW TEST:12.995 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1675,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:33:36.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3cafd3ed-95e4-4d8b-acc9-cf7ebb8ea655 STEP: Creating a pod to test consume configMaps Jan 4 14:33:36.340: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80" in namespace "projected-6777" to be "success or failure" Jan 4 14:33:36.411: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80": Phase="Pending", Reason="", readiness=false. Elapsed: 70.640266ms Jan 4 14:33:38.416: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076225438s Jan 4 14:33:40.423: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082613146s Jan 4 14:33:42.463: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122591504s Jan 4 14:33:44.468: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128077782s Jan 4 14:33:46.475: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134958007s STEP: Saw pod success Jan 4 14:33:46.475: INFO: Pod "pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80" satisfied condition "success or failure" Jan 4 14:33:46.479: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80 container projected-configmap-volume-test: STEP: delete the pod Jan 4 14:33:46.568: INFO: Waiting for pod pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80 to disappear Jan 4 14:33:46.602: INFO: Pod pod-projected-configmaps-9912bda0-9f9b-4c94-9efa-3c4433685e80 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:33:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6777" for this suite. 
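------------------------------
The non-root variant above hinges on file modes: the projected keys must carry a mode the pod's UID can read. The default 0644 already is world-readable; setting defaultMode makes the requirement explicit. A minimal sketch, with illustrative names and busybox in place of the test image:

kubectl create configmap nonroot-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # the "as non-root" part of the test name
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      defaultMode: 0444           # world-readable, so UID 1000 can read the key
      sources:
      - configMap:
          name: nonroot-cm
EOF
kubectl logs cm-nonroot-demo      # expect "value-1" once the pod reports Succeeded
------------------------------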
• [SLOW TEST:10.555 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1699,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:33:46.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 4 14:33:46.842: INFO: Number of nodes with available pods: 0 Jan 4 14:33:46.842: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:47.857: INFO: Number of nodes with available pods: 0 Jan 4 14:33:47.857: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:48.928: INFO: Number of nodes with available pods: 0 Jan 4 14:33:48.928: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:49.858: INFO: Number of nodes with available pods: 0 Jan 4 14:33:49.858: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:51.555: INFO: Number of nodes with available pods: 0 Jan 4 14:33:51.555: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:51.966: INFO: Number of nodes with available pods: 0 Jan 4 14:33:51.966: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:53.087: INFO: Number of nodes with available pods: 0 Jan 4 14:33:53.087: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:53.855: INFO: Number of nodes with available pods: 1 Jan 4 14:33:53.855: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:54.859: INFO: Number of nodes with available pods: 1 Jan 4 14:33:54.860: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:55.863: INFO: Number of nodes with available pods: 1 Jan 4 14:33:55.863: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:56.856: INFO: Number of nodes with available pods: 2 Jan 4 14:33:56.856: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 4 14:33:56.896: INFO: Number of nodes with available pods: 1 Jan 4 14:33:56.897: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:57.907: INFO: Number of nodes with available pods: 1 Jan 4 14:33:57.907: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:58.915: INFO: Number of nodes with available pods: 1 Jan 4 14:33:58.915: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:33:59.953: INFO: Number of nodes with available pods: 1 Jan 4 14:33:59.953: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:00.904: INFO: Number of nodes with available pods: 1 Jan 4 14:34:00.904: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:01.919: INFO: Number of nodes with available pods: 1 Jan 4 14:34:01.919: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:02.910: INFO: Number of nodes with available pods: 1 Jan 4 14:34:02.910: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:03.914: INFO: Number of nodes with available pods: 1 Jan 4 14:34:03.914: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:04.910: INFO: Number of nodes with available pods: 1 Jan 4 14:34:04.910: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:05.911: INFO: Number of nodes with available pods: 1 Jan 4 14:34:05.911: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:06.903: INFO: Number of nodes with available pods: 1 Jan 4 14:34:06.903: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:07.903: INFO: Number of nodes with available pods: 1 Jan 4 14:34:07.903: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:08.936: INFO: Number of nodes with available pods: 1 Jan 4 14:34:08.936: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:09.910: INFO: Number of nodes with available pods: 1 Jan 4 14:34:09.910: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:10.907: INFO: Number of nodes with available pods: 1 Jan 4 14:34:10.907: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:11.906: INFO: Number of nodes with available pods: 1 Jan 4 14:34:11.906: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:12.915: INFO: Number of nodes with available pods: 1 Jan 4 14:34:12.915: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:13.904: INFO: Number of nodes with available pods: 1 Jan 4 14:34:13.904: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:14.908: INFO: Number of nodes with available pods: 1 Jan 4 14:34:14.908: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:15.915: INFO: Number of nodes with available pods: 1 Jan 4 14:34:15.915: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:16.913: INFO: Number of nodes with available pods: 1 Jan 4 14:34:16.913: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:17.920: INFO: Number of nodes with available pods: 1 Jan 4 14:34:17.920: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:18.905: INFO: Number of nodes with available pods: 1 Jan 4 14:34:18.905: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:19.942: INFO: Number of nodes with available pods: 1 Jan 4 14:34:19.942: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:20.913: INFO: Number of nodes with available pods: 1 Jan 4 14:34:20.913: INFO: Node jerma-node is running more 
than one daemon pod Jan 4 14:34:21.909: INFO: Number of nodes with available pods: 1 Jan 4 14:34:21.909: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:34:22.908: INFO: Number of nodes with available pods: 2 Jan 4 14:34:22.909: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1339, will wait for the garbage collector to delete the pods Jan 4 14:34:22.972: INFO: Deleting DaemonSet.extensions daemon-set took: 6.705553ms Jan 4 14:34:23.373: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.341416ms Jan 4 14:34:33.231: INFO: Number of nodes with available pods: 0 Jan 4 14:34:33.232: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 14:34:33.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1339/daemonsets","resourceVersion":"31688"},"items":null} Jan 4 14:34:33.246: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1339/pods","resourceVersion":"31688"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:34:33.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1339" for this suite. • [SLOW TEST:46.654 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":99,"skipped":1701,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:34:33.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:34:33.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1035" for this suite. 
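------------------------------
The Table-transformation test that just ran is about content negotiation: clients such as kubectl ask the API server for a server-side Table rendering via the Accept header, and a backend that cannot produce Table metadata must answer 406 Not Acceptable rather than mis-render. The negotiation is easy to poke at by hand; the port and namespace below are illustrative:

kubectl proxy --port=8001 &       # local, unauthenticated proxy to the apiserver
# Request the meta.k8s.io/v1 Table rendering of pods, as kubectl get does:
curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods | head -c 400; echo
# A resource whose backend does not implement Table metadata yields
# HTTP 406 for the same header, which is the condition asserted above.
------------------------------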
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":100,"skipped":1707,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:34:33.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:34:33.868: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:34:43.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2133" for this suite. • [SLOW TEST:10.287 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:34:43.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in 
the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:34:44.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4967" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":102,"skipped":1754,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:34:44.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:35:00.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4516" for this suite. • [SLOW TEST:16.483 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":103,"skipped":1758,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:35:00.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jan 4 14:35:00.627: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:35:00.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9698" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":104,"skipped":1774,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:35:01.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:35:13.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3280" for this suite. • [SLOW TEST:11.292 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":105,"skipped":1776,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:35:13.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 4 14:35:13.385: INFO: Waiting up to 5m0s for pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89" in namespace "emptydir-5537" to be "success or failure" Jan 4 14:35:13.421: INFO: Pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89": Phase="Pending", Reason="", readiness=false. Elapsed: 35.325507ms Jan 4 14:35:15.429: INFO: Pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043734861s Jan 4 14:35:17.434: INFO: Pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048293718s Jan 4 14:35:19.438: INFO: Pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052510614s Jan 4 14:35:21.443: INFO: Pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057355673s STEP: Saw pod success Jan 4 14:35:21.443: INFO: Pod "pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89" satisfied condition "success or failure" Jan 4 14:35:21.446: INFO: Trying to get logs from node jerma-node pod pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89 container test-container: STEP: delete the pod Jan 4 14:35:21.492: INFO: Waiting for pod pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89 to disappear Jan 4 14:35:21.498: INFO: Pod pod-42feb7d0-d7e0-4226-abe5-56ca61e9bb89 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:35:21.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5537" for this suite. 
• [SLOW TEST:8.508 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1790,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:35:21.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jan 4 14:35:21.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4389' Jan 4 14:35:25.102: INFO: stderr: "" Jan 4 14:35:25.102: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 14:35:25.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4389' Jan 4 14:35:25.283: INFO: stderr: "" Jan 4 14:35:25.283: INFO: stdout: "update-demo-nautilus-f7gkx update-demo-nautilus-kxm6n " Jan 4 14:35:25.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7gkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:35:25.446: INFO: stderr: "" Jan 4 14:35:25.446: INFO: stdout: "" Jan 4 14:35:25.446: INFO: update-demo-nautilus-f7gkx is created but not running Jan 4 14:35:30.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4389' Jan 4 14:35:30.636: INFO: stderr: "" Jan 4 14:35:30.636: INFO: stdout: "update-demo-nautilus-f7gkx update-demo-nautilus-kxm6n " Jan 4 14:35:30.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7gkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:35:30.735: INFO: stderr: "" Jan 4 14:35:30.735: INFO: stdout: "" Jan 4 14:35:30.735: INFO: update-demo-nautilus-f7gkx is created but not running Jan 4 14:35:35.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4389' Jan 4 14:35:35.896: INFO: stderr: "" Jan 4 14:35:35.896: INFO: stdout: "update-demo-nautilus-f7gkx update-demo-nautilus-kxm6n " Jan 4 14:35:35.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7gkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:35:36.005: INFO: stderr: "" Jan 4 14:35:36.005: INFO: stdout: "true" Jan 4 14:35:36.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7gkx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:35:36.098: INFO: stderr: "" Jan 4 14:35:36.098: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:35:36.098: INFO: validating pod update-demo-nautilus-f7gkx Jan 4 14:35:36.110: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:35:36.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:35:36.111: INFO: update-demo-nautilus-f7gkx is verified up and running Jan 4 14:35:36.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxm6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:35:36.188: INFO: stderr: "" Jan 4 14:35:36.188: INFO: stdout: "true" Jan 4 14:35:36.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxm6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:35:36.289: INFO: stderr: "" Jan 4 14:35:36.289: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:35:36.289: INFO: validating pod update-demo-nautilus-kxm6n Jan 4 14:35:36.295: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:35:36.295: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 4 14:35:36.295: INFO: update-demo-nautilus-kxm6n is verified up and running STEP: rolling-update to new replication controller Jan 4 14:35:36.297: INFO: scanned /root for discovery docs: Jan 4 14:35:36.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4389' Jan 4 14:36:13.069: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 4 14:36:13.070: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 14:36:13.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4389' Jan 4 14:36:13.262: INFO: stderr: "" Jan 4 14:36:13.262: INFO: stdout: "update-demo-kitten-sjftt update-demo-kitten-wbxbr " Jan 4 14:36:13.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sjftt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:36:13.358: INFO: stderr: "" Jan 4 14:36:13.358: INFO: stdout: "true" Jan 4 14:36:13.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sjftt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:36:13.436: INFO: stderr: "" Jan 4 14:36:13.436: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 4 14:36:13.436: INFO: validating pod update-demo-kitten-sjftt Jan 4 14:36:13.459: INFO: got data: { "image": "kitten.jpg" } Jan 4 14:36:13.459: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 4 14:36:13.460: INFO: update-demo-kitten-sjftt is verified up and running Jan 4 14:36:13.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wbxbr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:36:13.526: INFO: stderr: "" Jan 4 14:36:13.526: INFO: stdout: "true" Jan 4 14:36:13.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wbxbr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4389' Jan 4 14:36:13.689: INFO: stderr: "" Jan 4 14:36:13.689: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 4 14:36:13.689: INFO: validating pod update-demo-kitten-wbxbr Jan 4 14:36:13.694: INFO: got data: { "image": "kitten.jpg" } Jan 4 14:36:13.694: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 4 14:36:13.694: INFO: update-demo-kitten-wbxbr is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:36:13.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4389" for this suite. • [SLOW TEST:52.126 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":107,"skipped":1792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:36:13.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:36:13.810: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 4 14:36:16.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2334 create -f -' Jan 4 14:36:20.177: INFO: stderr: "" Jan 4 14:36:20.178: INFO: stdout: "e2e-test-crd-publish-openapi-4991-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 4 14:36:20.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2334 delete e2e-test-crd-publish-openapi-4991-crds test-cr' Jan 4 14:36:20.467: INFO: stderr: "" Jan 4 14:36:20.467: INFO: stdout: "e2e-test-crd-publish-openapi-4991-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 4 14:36:20.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2334 apply -f -' Jan 4 14:36:20.849: INFO: stderr: "" Jan 4 14:36:20.849: INFO: stdout: "e2e-test-crd-publish-openapi-4991-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 4 14:36:20.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2334 delete e2e-test-crd-publish-openapi-4991-crds test-cr' Jan 4 14:36:21.185: INFO: stderr: "" Jan 4 14:36:21.185: INFO: stdout: "e2e-test-crd-publish-openapi-4991-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 4 14:36:21.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4991-crds' Jan 4 14:36:21.565: INFO: stderr: "" Jan 4 14:36:21.565: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4991-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:36:24.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2334" for this suite. • [SLOW TEST:11.233 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":108,"skipped":1820,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:36:24.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jan 4 14:36:24.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3876 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 4 14:36:33.180: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0104 14:36:31.903791 2906 log.go:172] (0xc0008f6210) (0xc000677b80) Create stream\nI0104 14:36:31.903993 2906 log.go:172] (0xc0008f6210) (0xc000677b80) Stream added, broadcasting: 1\nI0104 14:36:31.912228 2906 log.go:172] (0xc0008f6210) Reply frame received for 1\nI0104 14:36:31.912305 2906 log.go:172] (0xc0008f6210) (0xc000677c20) Create stream\nI0104 14:36:31.912325 2906 log.go:172] (0xc0008f6210) (0xc000677c20) Stream added, broadcasting: 3\nI0104 14:36:31.914173 2906 log.go:172] (0xc0008f6210) Reply frame received for 3\nI0104 14:36:31.914239 2906 log.go:172] (0xc0008f6210) (0xc000796000) Create stream\nI0104 14:36:31.914252 2906 log.go:172] (0xc0008f6210) (0xc000796000) Stream added, broadcasting: 5\nI0104 14:36:31.915908 2906 log.go:172] (0xc0008f6210) Reply frame received for 5\nI0104 14:36:31.915930 2906 log.go:172] (0xc0008f6210) (0xc000677cc0) Create stream\nI0104 14:36:31.915938 2906 log.go:172] (0xc0008f6210) (0xc000677cc0) Stream added, broadcasting: 7\nI0104 14:36:31.917319 2906 log.go:172] (0xc0008f6210) Reply frame received for 7\nI0104 14:36:31.917529 2906 log.go:172] (0xc000677c20) (3) Writing data frame\nI0104 14:36:31.917705 2906 log.go:172] (0xc000677c20) (3) Writing data frame\nI0104 14:36:31.928444 2906 log.go:172] (0xc0008f6210) Data frame received for 5\nI0104 14:36:31.928556 2906 log.go:172] (0xc000796000) (5) Data frame handling\nI0104 14:36:31.928658 2906 log.go:172] (0xc000796000) (5) Data frame sent\nI0104 14:36:31.930141 2906 log.go:172] (0xc0008f6210) Data frame received for 5\nI0104 14:36:31.930177 2906 log.go:172] (0xc000796000) (5) Data frame handling\nI0104 14:36:31.930202 2906 log.go:172] (0xc000796000) (5) Data frame sent\nI0104 14:36:33.079807 2906 log.go:172] (0xc0008f6210) Data frame received for 1\nI0104 14:36:33.080805 2906 log.go:172] (0xc000677b80) (1) Data frame handling\nI0104 14:36:33.080886 2906 log.go:172] (0xc000677b80) (1) Data frame sent\nI0104 14:36:33.081042 2906 log.go:172] (0xc0008f6210) (0xc000677cc0) Stream removed, broadcasting: 7\nI0104 14:36:33.081384 2906 log.go:172] (0xc0008f6210) (0xc000677b80) Stream removed, broadcasting: 1\nI0104 14:36:33.082378 2906 log.go:172] (0xc0008f6210) (0xc000796000) Stream removed, broadcasting: 5\nI0104 14:36:33.082534 2906 log.go:172] (0xc0008f6210) (0xc000677c20) Stream removed, broadcasting: 3\nI0104 14:36:33.082736 2906 log.go:172] (0xc0008f6210) Go away received\nI0104 14:36:33.082858 2906 log.go:172] (0xc0008f6210) (0xc000677b80) Stream removed, broadcasting: 1\nI0104 14:36:33.082915 2906 log.go:172] (0xc0008f6210) (0xc000677c20) Stream removed, broadcasting: 3\nI0104 14:36:33.082948 2906 log.go:172] (0xc0008f6210) (0xc000796000) Stream removed, broadcasting: 5\nI0104 14:36:33.083002 2906 log.go:172] (0xc0008f6210) (0xc000677cc0) Stream removed, broadcasting: 7\n" Jan 4 14:36:33.181: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:36:35.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3876" for this suite. 
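Stripped of the framework plumbing, the command under test is an attach-with-stdin Job creation where --rm tears the Job down once the attached session ends. Every flag below is taken from the log itself; as the stderr above notes, the job/v1 generator was already deprecated in this release:

    echo abcd1234 | kubectl --kubeconfig=/root/.kube/config -n kubectl-3876 \
      run e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 \
      --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
      -- sh -c 'cat && echo stdin closed'
    # Expected stdout: the echoed stdin, "stdin closed", then
    # 'job.batch "e2e-test-rm-busybox-job" deleted' once --rm kicks in,
    # matching the stdout captured above.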
• [SLOW TEST:10.260 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":109,"skipped":1831,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:36:35.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:37:35.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3776" for this suite. 
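The probe test above emits no STEP lines because its assertion is purely temporal: for roughly the minute the test waits, the pod must stay Ready=false with restartCount 0, since readiness failures (unlike liveness failures) never restart a container. A minimal pod of that shape (a sketch with illustrative names, not the exact spec the test builds):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready-demo          # illustrative name
    spec:
      containers:
      - name: probe-test
        image: docker.io/library/busybox:1.29
        command: ["sleep", "600"]
        readinessProbe:
          exec:
            command: ["false"]        # exits non-zero every period: never Ready
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF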
• [SLOW TEST:60.178 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1848,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:37:35.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-d885d456-9e01-4e96-863a-82a32d24d3dc STEP: Creating a pod to test consume secrets Jan 4 14:37:35.536: INFO: Waiting up to 5m0s for pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700" in namespace "secrets-1364" to be "success or failure" Jan 4 14:37:35.549: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Pending", Reason="", readiness=false. Elapsed: 12.212484ms Jan 4 14:37:37.558: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02143087s Jan 4 14:37:39.562: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025504191s Jan 4 14:37:41.572: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035462975s Jan 4 14:37:43.580: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042971125s Jan 4 14:37:45.586: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049143043s Jan 4 14:37:47.594: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.057495864s STEP: Saw pod success Jan 4 14:37:47.595: INFO: Pod "pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700" satisfied condition "success or failure" Jan 4 14:37:47.598: INFO: Trying to get logs from node jerma-node pod pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700 container secret-volume-test: STEP: delete the pod Jan 4 14:37:47.756: INFO: Waiting for pod pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700 to disappear Jan 4 14:37:47.761: INFO: Pod pod-secrets-ebf5afc0-dda4-4d9c-8e0a-1fac154bf700 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:37:47.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1364" for this suite. • [SLOW TEST:12.396 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1856,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:37:47.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:38:09.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1131" for this suite. STEP: Destroying namespace "nsdeletetest-9487" for this suite. Jan 4 14:38:09.146: INFO: Namespace nsdeletetest-9487 was already deleted STEP: Destroying namespace "nsdeletetest-3306" for this suite. 
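The namespace test above is a garbage-collection check: deleting a namespace must remove the pods inside it, and a freshly recreated namespace of the same name must come back empty. By hand, with illustrative names, the sequence is roughly:

    kubectl create namespace nsdelete-demo
    kubectl -n nsdelete-demo run test-pod --restart=Never \
      --image=docker.io/library/busybox:1.29 -- sleep 600
    kubectl delete namespace nsdelete-demo   # waits for finalization; pods go with it
    kubectl create namespace nsdelete-demo
    kubectl -n nsdelete-demo get pods        # expect: No resources found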
• [SLOW TEST:21.380 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":112,"skipped":1856,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:38:09.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:38:09.823: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:38:11.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:38:13.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:38:15.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:38:17.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:38:20.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745489, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:38:22.900: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration Jan 4 14:38:23.135: INFO: Waiting for webhook configuration to be ready... 
STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:38:23.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3347" for this suite. STEP: Destroying namespace "webhook-3347-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.420 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":113,"skipped":1866,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:38:23.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.test-service-2.dns-9968.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 92.216.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.216.92_udp@PTR;check="$$(dig +tcp +noall +answer +search 92.216.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.216.92_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9968.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9968.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 92.216.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.216.92_udp@PTR;check="$$(dig +tcp +noall +answer +search 92.216.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.216.92_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 14:38:43.866: INFO: Unable to read wheezy_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.885: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.905: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.944: INFO: Unable to read jessie_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.952: INFO: Unable to read jessie_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.957: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:43.961: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:44.039: INFO: Lookups using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b failed for: [wheezy_udp@dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_udp@dns-test-service.dns-9968.svc.cluster.local jessie_tcp@dns-test-service.dns-9968.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local] Jan 4 14:38:49.049: INFO: Unable to read wheezy_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.057: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) 
Jan 4 14:38:49.065: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.073: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.141: INFO: Unable to read jessie_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.146: INFO: Unable to read jessie_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.152: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.159: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:49.213: INFO: Lookups using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b failed for: [wheezy_udp@dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_udp@dns-test-service.dns-9968.svc.cluster.local jessie_tcp@dns-test-service.dns-9968.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local] Jan 4 14:38:54.053: INFO: Unable to read wheezy_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.064: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.072: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.199: INFO: Unable to read jessie_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods 
dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.205: INFO: Unable to read jessie_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.208: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.213: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:54.302: INFO: Lookups using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b failed for: [wheezy_udp@dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_udp@dns-test-service.dns-9968.svc.cluster.local jessie_tcp@dns-test-service.dns-9968.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local] Jan 4 14:38:59.048: INFO: Unable to read wheezy_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.060: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.065: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.094: INFO: Unable to read jessie_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.098: INFO: Unable to read jessie_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.102: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.106: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could 
not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:38:59.154: INFO: Lookups using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b failed for: [wheezy_udp@dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_udp@dns-test-service.dns-9968.svc.cluster.local jessie_tcp@dns-test-service.dns-9968.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local] Jan 4 14:39:04.052: INFO: Unable to read wheezy_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.057: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.062: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.070: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.098: INFO: Unable to read jessie_udp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.103: INFO: Unable to read jessie_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.106: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.109: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:04.135: INFO: Lookups using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b failed for: [wheezy_udp@dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_udp@dns-test-service.dns-9968.svc.cluster.local jessie_tcp@dns-test-service.dns-9968.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local] Jan 4 14:39:09.047: INFO: Unable to read wheezy_udp@dns-test-service.dns-9968.svc.cluster.local 
from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:09.068: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:09.074: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:09.078: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:09.124: INFO: Unable to read jessie_tcp@dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b: the server could not find the requested resource (get pods dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b) Jan 4 14:39:09.220: INFO: Lookups using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b failed for: [wheezy_udp@dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@dns-test-service.dns-9968.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9968.svc.cluster.local jessie_tcp@dns-test-service.dns-9968.svc.cluster.local] Jan 4 14:39:14.196: INFO: DNS probes using dns-9968/dns-test-ec3e1b97-c3ae-4cb1-8d02-8de883684c8b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:39:14.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9968" for this suite. 
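The wheezy/jessie probe scripts above reduce to a handful of lookups against the cluster DNS; the repeated "Unable to read" rounds only reflect how long the records took to appear before every lookup succeeded. Run from any pod in the namespace, the equivalent queries are (service names and IP taken from the log):

    dig +short dns-test-service.dns-9968.svc.cluster.local A               # service A record
    dig +short _http._tcp.dns-test-service.dns-9968.svc.cluster.local SRV  # named-port SRV record
    dig +short -x 10.96.216.92                                             # reverse PTR for the service IP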
• [SLOW TEST:51.297 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":114,"skipped":1869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:39:14.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-36a5946b-0266-4df1-9322-650e4d9502db STEP: Creating a pod to test consume secrets Jan 4 14:39:15.195: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73" in namespace "projected-8001" to be "success or failure" Jan 4 14:39:15.276: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Pending", Reason="", readiness=false. Elapsed: 80.650073ms Jan 4 14:39:17.280: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08456691s Jan 4 14:39:19.285: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089676546s Jan 4 14:39:21.316: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120876606s Jan 4 14:39:23.323: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127327338s Jan 4 14:39:25.328: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132899114s Jan 4 14:39:27.341: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.145618561s STEP: Saw pod success Jan 4 14:39:27.341: INFO: Pod "pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73" satisfied condition "success or failure" Jan 4 14:39:27.345: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73 container projected-secret-volume-test: STEP: delete the pod Jan 4 14:39:27.421: INFO: Waiting for pod pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73 to disappear Jan 4 14:39:27.424: INFO: Pod pod-projected-secrets-e89addca-6387-45af-8c2f-7362772fbd73 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:39:27.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8001" for this suite. • [SLOW TEST:12.560 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1898,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:39:27.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-596ef4c7-9e5a-4447-ac31-3b255ed3fbfe [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:39:27.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2261" for this suite. 
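The empty-key spec above passes precisely because the API server refuses the create: an empty string is not a valid key in a Secret's data map. A minimal client-go sketch of the same rejection, assuming client-go v0.18 or newer (older releases, like the v1.17 code vendored in this suite, omit the context argument); the secret name is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// An empty key fails server-side validation, so this create never succeeds.
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
		Data:       map[string][]byte{"": []byte("value")},
	}, metav1.CreateOptions{})
	fmt.Println("expected validation error:", err)
}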
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":116,"skipped":1909,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:39:27.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 4 14:39:36.321: INFO: Successfully updated pod "labelsupdate344b7440-a6e6-4d78-895d-82e7841a4f5d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:39:38.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8666" for this suite. • [SLOW TEST:10.869 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1910,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:39:38.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:39:38.684: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 4 14:39:38.721: INFO: Number of nodes with available pods: 0 Jan 4 14:39:38.721: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:40.322: INFO: Number of nodes with available pods: 0 Jan 4 14:39:40.322: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:41.136: INFO: Number of nodes with available pods: 0 Jan 4 14:39:41.136: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:41.742: INFO: Number of nodes with available pods: 0 Jan 4 14:39:41.743: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:42.732: INFO: Number of nodes with available pods: 0 Jan 4 14:39:42.732: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:44.877: INFO: Number of nodes with available pods: 0 Jan 4 14:39:44.877: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:45.799: INFO: Number of nodes with available pods: 0 Jan 4 14:39:45.799: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:47.639: INFO: Number of nodes with available pods: 0 Jan 4 14:39:47.639: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:47.965: INFO: Number of nodes with available pods: 0 Jan 4 14:39:47.965: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:48.731: INFO: Number of nodes with available pods: 0 Jan 4 14:39:48.731: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:49.748: INFO: Number of nodes with available pods: 0 Jan 4 14:39:49.749: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:50.732: INFO: Number of nodes with available pods: 1 Jan 4 14:39:50.732: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:39:51.727: INFO: Number of nodes with available pods: 2 Jan 4 14:39:51.727: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 4 14:39:51.775: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:51.775: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:52.799: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:52.799: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:54.284: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:54.284: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:54.796: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:54.796: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:55.810: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:55.810: INFO: Wrong image for pod: daemon-set-z7zxq. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:55.810: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:39:56.847: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:56.847: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:56.847: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:39:57.802: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:57.802: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:57.802: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:39:58.795: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:58.795: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:58.795: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:39:59.854: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:59.854: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:39:59.854: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:40:00.801: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:00.801: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:00.801: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:40:01.802: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:01.802: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:01.802: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:40:02.848: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:02.849: INFO: Wrong image for pod: daemon-set-z7zxq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:02.849: INFO: Pod daemon-set-z7zxq is not available Jan 4 14:40:03.802: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:03.802: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:05.410: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:05.410: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:05.935: INFO: Wrong image for pod: daemon-set-hzxf9. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:05.936: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:06.801: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:06.801: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:07.796: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:07.796: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:10.066: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:10.066: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:11.892: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:11.893: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:12.799: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:12.799: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:13.796: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:13.796: INFO: Pod daemon-set-rlgxq is not available Jan 4 14:40:14.798: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:15.827: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:16.797: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:17.802: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:18.802: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:18.802: INFO: Pod daemon-set-hzxf9 is not available Jan 4 14:40:19.797: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:19.797: INFO: Pod daemon-set-hzxf9 is not available Jan 4 14:40:20.796: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:20.796: INFO: Pod daemon-set-hzxf9 is not available Jan 4 14:40:21.806: INFO: Wrong image for pod: daemon-set-hzxf9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 4 14:40:21.806: INFO: Pod daemon-set-hzxf9 is not available Jan 4 14:40:22.813: INFO: Pod daemon-set-p69lr is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 4 14:40:22.837: INFO: Number of nodes with available pods: 1 Jan 4 14:40:22.838: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:23.849: INFO: Number of nodes with available pods: 1 Jan 4 14:40:23.849: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:24.849: INFO: Number of nodes with available pods: 1 Jan 4 14:40:24.850: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:25.854: INFO: Number of nodes with available pods: 1 Jan 4 14:40:25.855: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:26.864: INFO: Number of nodes with available pods: 1 Jan 4 14:40:26.864: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:27.849: INFO: Number of nodes with available pods: 1 Jan 4 14:40:27.849: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:28.848: INFO: Number of nodes with available pods: 1 Jan 4 14:40:28.848: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:29.850: INFO: Number of nodes with available pods: 1 Jan 4 14:40:29.850: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:30.849: INFO: Number of nodes with available pods: 1 Jan 4 14:40:30.849: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:31.852: INFO: Number of nodes with available pods: 1 Jan 4 14:40:31.852: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:32.846: INFO: Number of nodes with available pods: 1 Jan 4 14:40:32.846: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:33.850: INFO: Number of nodes with available pods: 1 Jan 4 14:40:33.850: INFO: Node jerma-node is running more than one daemon pod Jan 4 14:40:34.904: INFO: Number of nodes with available pods: 2 Jan 4 14:40:34.904: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1793, will wait for the garbage collector to delete the pods Jan 4 14:40:34.976: INFO: Deleting DaemonSet.extensions daemon-set took: 4.826254ms Jan 4 14:40:35.376: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.460821ms Jan 4 14:40:53.181: INFO: Number of nodes with available pods: 0 Jan 4 14:40:53.181: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 14:40:53.184: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1793/daemonsets","resourceVersion":"33252"},"items":null} Jan 4 14:40:53.188: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1793/pods","resourceVersion":"33252"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:40:53.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1793" for this suite. 
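The image flip above is driven entirely by the DaemonSet's update strategy: with RollingUpdate (the default in apps/v1), changing the pod template deletes and recreates daemon pods node by node, which is why one pod at a time reports "not available" while its replacement starts. A sketch of the relevant spec fields; the images are taken from the log, everything else is illustrative:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"daemonset-name": "daemon-set"}},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				// RollingUpdate replaces pods automatically when the template
				// changes; OnDelete would wait for manual pod deletion instead.
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"daemonset-name": "daemon-set"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}

	// Swapping the template image and updating the object is all it takes
	// to trigger the node-by-node rollout observed above.
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
	fmt.Println("rollout would now replace pods running httpd:2.4.38-alpine")
}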
• [SLOW TEST:74.766 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":118,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:40:53.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 4 14:40:53.304: INFO: Waiting up to 5m0s for pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f" in namespace "downward-api-5099" to be "success or failure" Jan 4 14:40:53.308: INFO: Pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767442ms Jan 4 14:40:55.315: INFO: Pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011662469s Jan 4 14:40:57.322: INFO: Pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018141171s Jan 4 14:40:59.383: INFO: Pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078969498s Jan 4 14:41:01.393: INFO: Pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089552397s STEP: Saw pod success Jan 4 14:41:01.393: INFO: Pod "downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f" satisfied condition "success or failure" Jan 4 14:41:01.397: INFO: Trying to get logs from node jerma-node pod downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f container dapi-container: STEP: delete the pod Jan 4 14:41:01.448: INFO: Waiting for pod downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f to disappear Jan 4 14:41:01.453: INFO: Pod downward-api-b43b8566-a7a4-4cc9-b230-dfab1a06d63f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:41:01.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5099" for this suite. 
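The dapi-container above receives its pod UID through the downward API, injected as an environment variable at container start, with no API credentials involved. A sketch of that wiring, assuming a busybox image; metadata.name and metadata.namespace can be exposed the same way:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// fieldRef resolves against the pod's own metadata when
					// the container starts.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[0].Name, "->",
		pod.Spec.Containers[0].Env[0].ValueFrom.FieldRef.FieldPath)
}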
• [SLOW TEST:8.256 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:41:01.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:41:02.161: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:41:04.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:06.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:08.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745662, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:41:11.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:41:23.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8299" for this suite. STEP: Destroying namespace "webhook-8299-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.642 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":120,"skipped":2047,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:41:24.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ea4230bc-0669-4d81-90fa-bc6af580a337 STEP: Creating a pod to test consume secrets Jan 4 14:41:24.411: INFO: Waiting up to 5m0s for pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b" in namespace "secrets-4162" to be "success or failure" Jan 4 14:41:24.443: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.532571ms Jan 4 14:41:26.449: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037670992s Jan 4 14:41:28.454: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043205612s Jan 4 14:41:30.460: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048943837s Jan 4 14:41:32.472: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061084019s Jan 4 14:41:34.479: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068236773s STEP: Saw pod success Jan 4 14:41:34.479: INFO: Pod "pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b" satisfied condition "success or failure" Jan 4 14:41:34.482: INFO: Trying to get logs from node jerma-node pod pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b container secret-volume-test: STEP: delete the pod Jan 4 14:41:34.561: INFO: Waiting for pod pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b to disappear Jan 4 14:41:34.565: INFO: Pod pod-secrets-1e7464cf-068a-463b-a0a0-0bc24412328b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:41:34.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4162" for this suite. 
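defaultMode in the test above sets the permission bits the kubelet applies to every file rendered from the secret into the volume. A sketch of the volume wiring, assuming mode 0400 and a busybox reader (the e2e suite uses its own test image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // r-------- on every projected key file
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
	fmt.Printf("defaultMode: %#o\n", *pod.Spec.Volumes[0].VolumeSource.Secret.DefaultMode)
}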
• [SLOW TEST:10.478 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2047,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:41:34.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 4 14:41:34.706: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 4 14:41:39.711: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:41:39.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2487" for this suite. 
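"Released" in the test above means ownership, not deletion: a ReplicationController only counts pods matching its selector, so rewriting the name=pod-release label orphans the running pod, and the controller creates a replacement to satisfy spec.replicas. A sketch of that relabel, assuming client-go v0.18+; the pod name is a hypothetical placeholder since the log does not show the generated suffix:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Overwrite the label the RC selects on; the pod keeps running but is
	// no longer owned by the controller.
	patch := []byte(`{"metadata":{"labels":{"name":"pod-released"}}}`)
	_, err = cs.CoreV1().Pods("replication-controller-2487").Patch(
		context.TODO(), "pod-release-<hash>", // illustrative pod name
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}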
• [SLOW TEST:5.259 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":122,"skipped":2054,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:41:39.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:41:40.812: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:41:42.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:44.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 4 14:41:46.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:48.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:50.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:52.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:41:54.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745700, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:41:57.869: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:41:57.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:41:58.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9128" for this suite. STEP: Destroying namespace "webhook-9128-markers" for this suite. 
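On the webhook side, each denial above is one AdmissionReview round-trip in which the handler sets allowed=false and a message the API server surfaces to the client. A minimal sketch of such a handler, a stand-in for the e2e sample webhook rather than its actual code; the route and TLS cert/key paths are placeholders:

package main

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func deny(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if review.Request == nil {
		http.Error(w, "empty AdmissionReview request", http.StatusBadRequest)
		return
	}
	// Echo the request UID and refuse the operation; Result.Message is
	// what the client sees when the create/update/delete is denied.
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: false,
		Result:  &metav1.Status{Message: "the custom resource contains disallowed data"},
	}
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/crd", deny) // illustrative path
	// Admission webhooks must serve TLS; cert/key paths are placeholders.
	http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil)
}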
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.988 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":123,"skipped":2056,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:41:58.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 4 14:41:59.955: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 4 14:42:01.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:42:03.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:42:05.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:42:07.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:42:09.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:42:13.003: 
INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:42:13.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:42:14.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9880" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:15.311 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":124,"skipped":2061,"failed":0} [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:42:14.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b in namespace container-probe-5085 Jan 4 14:42:22.342: INFO: Started pod liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b in namespace container-probe-5085 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 14:42:22.345: INFO: Initial restart count of pod liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b is 0 Jan 4 14:42:38.617: INFO: Restart count of pod container-probe-5085/liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b is now 1 (16.271750135s elapsed) Jan 4 14:42:56.671: INFO: Restart count of pod container-probe-5085/liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b is now 2 (34.32567708s elapsed) Jan 4 14:43:16.722: INFO: Restart count of pod container-probe-5085/liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b is now 3 (54.377266626s elapsed) Jan 4 14:43:34.816: INFO: Restart count of pod container-probe-5085/liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b is now 4 (1m12.471453222s elapsed) Jan 4 14:44:41.083: INFO: Restart count of pod 
container-probe-5085/liveness-d18d286c-1c8a-4ea5-8194-e27e0abddb9b is now 5 (2m18.737733986s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:44:41.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5085" for this suite. • [SLOW TEST:147.042 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2061,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:44:41.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:44:53.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6801" for this suite. 
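The kubelet test above never prints the failing write, but the mechanism is a single security-context field: with readOnlyRootFilesystem set, the runtime mounts the container's root filesystem read-only and any shell redirect onto it fails. A sketch, assuming busybox and an illustrative command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-fs",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					// Root filesystem is mounted read-only, so the redirect
					// to /file fails; writable volumes can still be mounted.
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	fmt.Println("readOnlyRootFilesystem:", *pod.Spec.Containers[0].SecurityContext.ReadOnlyRootFilesystem)
}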
• [SLOW TEST:12.202 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2071,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:44:53.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jan 4 14:44:53.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9162' Jan 4 14:44:54.058: INFO: stderr: "" Jan 4 14:44:54.058: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 4 14:44:54.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9162' Jan 4 14:44:54.269: INFO: stderr: "" Jan 4 14:44:54.270: INFO: stdout: "update-demo-nautilus-qvl54 update-demo-nautilus-vj7gs " Jan 4 14:44:54.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvl54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9162' Jan 4 14:44:54.510: INFO: stderr: "" Jan 4 14:44:54.510: INFO: stdout: "" Jan 4 14:44:54.510: INFO: update-demo-nautilus-qvl54 is created but not running Jan 4 14:44:59.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9162' Jan 4 14:45:00.226: INFO: stderr: "" Jan 4 14:45:00.227: INFO: stdout: "update-demo-nautilus-qvl54 update-demo-nautilus-vj7gs " Jan 4 14:45:00.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvl54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9162' Jan 4 14:45:00.401: INFO: stderr: "" Jan 4 14:45:00.401: INFO: stdout: "" Jan 4 14:45:00.401: INFO: update-demo-nautilus-qvl54 is created but not running Jan 4 14:45:05.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9162' Jan 4 14:45:05.554: INFO: stderr: "" Jan 4 14:45:05.554: INFO: stdout: "update-demo-nautilus-qvl54 update-demo-nautilus-vj7gs " Jan 4 14:45:05.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvl54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9162' Jan 4 14:45:05.683: INFO: stderr: "" Jan 4 14:45:05.683: INFO: stdout: "true" Jan 4 14:45:05.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvl54 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9162' Jan 4 14:45:05.812: INFO: stderr: "" Jan 4 14:45:05.812: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:45:05.812: INFO: validating pod update-demo-nautilus-qvl54 Jan 4 14:45:05.828: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:45:05.828: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:45:05.828: INFO: update-demo-nautilus-qvl54 is verified up and running Jan 4 14:45:05.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vj7gs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9162' Jan 4 14:45:05.992: INFO: stderr: "" Jan 4 14:45:05.992: INFO: stdout: "true" Jan 4 14:45:05.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vj7gs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9162' Jan 4 14:45:06.074: INFO: stderr: "" Jan 4 14:45:06.074: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 4 14:45:06.074: INFO: validating pod update-demo-nautilus-vj7gs Jan 4 14:45:06.086: INFO: got data: { "image": "nautilus.jpg" } Jan 4 14:45:06.086: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 4 14:45:06.086: INFO: update-demo-nautilus-vj7gs is verified up and running STEP: using delete to clean up resources Jan 4 14:45:06.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9162' Jan 4 14:45:06.195: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 4 14:45:06.195: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 4 14:45:06.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9162' Jan 4 14:45:06.282: INFO: stderr: "No resources found in kubectl-9162 namespace.\n" Jan 4 14:45:06.282: INFO: stdout: "" Jan 4 14:45:06.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9162 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 14:45:06.370: INFO: stderr: "" Jan 4 14:45:06.370: INFO: stdout: "update-demo-nautilus-qvl54\nupdate-demo-nautilus-vj7gs\n" Jan 4 14:45:06.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9162' Jan 4 14:45:07.945: INFO: stderr: "No resources found in kubectl-9162 namespace.\n" Jan 4 14:45:07.945: INFO: stdout: "" Jan 4 14:45:07.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9162 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 14:45:08.190: INFO: stderr: "" Jan 4 14:45:08.190: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:45:08.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9162" for this suite. • [SLOW TEST:14.802 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":127,"skipped":2089,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:45:08.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7086 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7086 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7086 Jan 4 14:45:08.500: INFO: Found 0 stateful pods, waiting for 1 Jan 4 14:45:18.508: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 4 14:45:18.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:45:19.015: INFO: stderr: "I0104 14:45:18.732597 3198 log.go:172] (0xc0000f5130) (0xc0007f1ea0) Create stream\nI0104 14:45:18.732842 3198 log.go:172] (0xc0000f5130) (0xc0007f1ea0) Stream added, broadcasting: 1\nI0104 14:45:18.740660 3198 log.go:172] (0xc0000f5130) Reply frame received for 1\nI0104 14:45:18.740715 3198 log.go:172] (0xc0000f5130) (0xc00088c000) Create stream\nI0104 14:45:18.740725 3198 log.go:172] (0xc0000f5130) (0xc00088c000) Stream added, broadcasting: 3\nI0104 14:45:18.743294 3198 log.go:172] (0xc0000f5130) Reply frame received for 3\nI0104 14:45:18.743322 3198 log.go:172] (0xc0000f5130) (0xc0009a0500) Create stream\nI0104 14:45:18.743342 3198 log.go:172] (0xc0000f5130) (0xc0009a0500) Stream added, broadcasting: 5\nI0104 14:45:18.746805 3198 log.go:172] (0xc0000f5130) Reply frame received for 5\nI0104 14:45:18.922072 3198 log.go:172] (0xc0000f5130) Data frame received for 5\nI0104 14:45:18.922140 3198 log.go:172] (0xc0009a0500) (5) Data frame handling\nI0104 14:45:18.922173 3198 log.go:172] (0xc0009a0500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:45:18.940874 3198 log.go:172] (0xc0000f5130) Data frame received for 3\nI0104 14:45:18.940912 3198 log.go:172] (0xc00088c000) (3) Data frame handling\nI0104 14:45:18.940921 3198 log.go:172] (0xc00088c000) (3) Data frame sent\nI0104 14:45:19.010921 3198 log.go:172] (0xc0000f5130) (0xc00088c000) Stream removed, broadcasting: 3\nI0104 14:45:19.011173 3198 log.go:172] (0xc0000f5130) Data frame received for 1\nI0104 14:45:19.011197 3198 log.go:172] (0xc0007f1ea0) (1) Data frame handling\nI0104 14:45:19.011206 3198 log.go:172] (0xc0007f1ea0) (1) Data frame sent\nI0104 14:45:19.011221 3198 log.go:172] (0xc0000f5130) (0xc0009a0500) Stream removed, broadcasting: 5\nI0104 14:45:19.011298 3198 log.go:172] (0xc0000f5130) (0xc0007f1ea0) Stream removed, broadcasting: 1\nI0104 14:45:19.011328 3198 log.go:172] (0xc0000f5130) Go away received\nI0104 14:45:19.011755 3198 log.go:172] (0xc0000f5130) (0xc0007f1ea0) Stream removed, broadcasting: 1\nI0104 14:45:19.011778 3198 log.go:172] (0xc0000f5130) (0xc00088c000) Stream removed, broadcasting: 3\nI0104 14:45:19.011788 3198 log.go:172] (0xc0000f5130) (0xc0009a0500) Stream removed, broadcasting: 5\n" Jan 4 14:45:19.015: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:45:19.015: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:45:19.019: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 4 14:45:29.024: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:45:29.024: INFO: 
Waiting for statefulset status.replicas updated to 0 Jan 4 14:45:29.040: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999744s Jan 4 14:45:30.046: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991272816s Jan 4 14:45:31.059: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984577745s Jan 4 14:45:32.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972094986s Jan 4 14:45:33.078: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.964371458s Jan 4 14:45:34.085: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952541712s Jan 4 14:45:35.095: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.946270857s Jan 4 14:45:36.102: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.935416515s Jan 4 14:45:37.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.92855087s Jan 4 14:45:38.115: INFO: Verifying statefulset ss doesn't scale past 1 for another 923.125821ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7086 Jan 4 14:45:39.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:45:39.489: INFO: stderr: "I0104 14:45:39.330447 3211 log.go:172] (0xc000c24e70) (0xc0006b7f40) Create stream\nI0104 14:45:39.330607 3211 log.go:172] (0xc000c24e70) (0xc0006b7f40) Stream added, broadcasting: 1\nI0104 14:45:39.336372 3211 log.go:172] (0xc000c24e70) Reply frame received for 1\nI0104 14:45:39.336403 3211 log.go:172] (0xc000c24e70) (0xc000bfe0a0) Create stream\nI0104 14:45:39.336415 3211 log.go:172] (0xc000c24e70) (0xc000bfe0a0) Stream added, broadcasting: 3\nI0104 14:45:39.337328 3211 log.go:172] (0xc000c24e70) Reply frame received for 3\nI0104 14:45:39.337360 3211 log.go:172] (0xc000c24e70) (0xc000c100a0) Create stream\nI0104 14:45:39.337369 3211 log.go:172] (0xc000c24e70) (0xc000c100a0) Stream added, broadcasting: 5\nI0104 14:45:39.338878 3211 log.go:172] (0xc000c24e70) Reply frame received for 5\nI0104 14:45:39.409975 3211 log.go:172] (0xc000c24e70) Data frame received for 5\nI0104 14:45:39.410034 3211 log.go:172] (0xc000c100a0) (5) Data frame handling\nI0104 14:45:39.410053 3211 log.go:172] (0xc000c100a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:45:39.410068 3211 log.go:172] (0xc000c24e70) Data frame received for 3\nI0104 14:45:39.410080 3211 log.go:172] (0xc000bfe0a0) (3) Data frame handling\nI0104 14:45:39.410095 3211 log.go:172] (0xc000bfe0a0) (3) Data frame sent\nI0104 14:45:39.480141 3211 log.go:172] (0xc000c24e70) Data frame received for 1\nI0104 14:45:39.480180 3211 log.go:172] (0xc000c24e70) (0xc000bfe0a0) Stream removed, broadcasting: 3\nI0104 14:45:39.480201 3211 log.go:172] (0xc0006b7f40) (1) Data frame handling\nI0104 14:45:39.480225 3211 log.go:172] (0xc0006b7f40) (1) Data frame sent\nI0104 14:45:39.480233 3211 log.go:172] (0xc000c24e70) (0xc0006b7f40) Stream removed, broadcasting: 1\nI0104 14:45:39.480272 3211 log.go:172] (0xc000c24e70) (0xc000c100a0) Stream removed, broadcasting: 5\nI0104 14:45:39.480296 3211 log.go:172] (0xc000c24e70) Go away received\nI0104 14:45:39.480605 3211 log.go:172] (0xc000c24e70) (0xc0006b7f40) Stream removed, broadcasting: 1\nI0104 14:45:39.480637 3211 log.go:172] (0xc000c24e70) (0xc000bfe0a0) Stream removed, broadcasting: 3\nI0104 14:45:39.480674 3211 
log.go:172] (0xc000c24e70) (0xc000c100a0) Stream removed, broadcasting: 5\n" Jan 4 14:45:39.489: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:45:39.489: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:45:39.495: INFO: Found 1 stateful pods, waiting for 3 Jan 4 14:45:49.502: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:45:49.502: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:45:49.502: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 4 14:45:59.502: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:45:59.502: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 14:45:59.502: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 4 14:45:59.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:46:00.017: INFO: stderr: "I0104 14:45:59.741239 3231 log.go:172] (0xc000b0c840) (0xc000a12dc0) Create stream\nI0104 14:45:59.741620 3231 log.go:172] (0xc000b0c840) (0xc000a12dc0) Stream added, broadcasting: 1\nI0104 14:45:59.765249 3231 log.go:172] (0xc000b0c840) Reply frame received for 1\nI0104 14:45:59.765351 3231 log.go:172] (0xc000b0c840) (0xc00063c780) Create stream\nI0104 14:45:59.765371 3231 log.go:172] (0xc000b0c840) (0xc00063c780) Stream added, broadcasting: 3\nI0104 14:45:59.768205 3231 log.go:172] (0xc000b0c840) Reply frame received for 3\nI0104 14:45:59.768257 3231 log.go:172] (0xc000b0c840) (0xc0004c3540) Create stream\nI0104 14:45:59.768281 3231 log.go:172] (0xc000b0c840) (0xc0004c3540) Stream added, broadcasting: 5\nI0104 14:45:59.770092 3231 log.go:172] (0xc000b0c840) Reply frame received for 5\nI0104 14:45:59.893075 3231 log.go:172] (0xc000b0c840) Data frame received for 5\nI0104 14:45:59.893231 3231 log.go:172] (0xc0004c3540) (5) Data frame handling\nI0104 14:45:59.893256 3231 log.go:172] (0xc0004c3540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:45:59.893307 3231 log.go:172] (0xc000b0c840) Data frame received for 3\nI0104 14:45:59.893314 3231 log.go:172] (0xc00063c780) (3) Data frame handling\nI0104 14:45:59.893324 3231 log.go:172] (0xc00063c780) (3) Data frame sent\nI0104 14:46:00.012336 3231 log.go:172] (0xc000b0c840) Data frame received for 1\nI0104 14:46:00.012398 3231 log.go:172] (0xc000a12dc0) (1) Data frame handling\nI0104 14:46:00.012423 3231 log.go:172] (0xc000a12dc0) (1) Data frame sent\nI0104 14:46:00.012633 3231 log.go:172] (0xc000b0c840) (0xc000a12dc0) Stream removed, broadcasting: 1\nI0104 14:46:00.012699 3231 log.go:172] (0xc000b0c840) (0xc00063c780) Stream removed, broadcasting: 3\nI0104 14:46:00.012932 3231 log.go:172] (0xc000b0c840) (0xc0004c3540) Stream removed, broadcasting: 5\nI0104 14:46:00.012982 3231 log.go:172] (0xc000b0c840) Go away received\nI0104 14:46:00.013011 3231 log.go:172] (0xc000b0c840) (0xc000a12dc0) Stream removed, broadcasting: 1\nI0104 14:46:00.013041 3231 log.go:172] (0xc000b0c840) (0xc00063c780) Stream removed, broadcasting: 3\nI0104 
14:46:00.013068 3231 log.go:172] (0xc000b0c840) (0xc0004c3540) Stream removed, broadcasting: 5\n" Jan 4 14:46:00.017: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:46:00.017: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:46:00.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:46:00.423: INFO: stderr: "I0104 14:46:00.182522 3250 log.go:172] (0xc000ab8f20) (0xc000a9c5a0) Create stream\nI0104 14:46:00.182892 3250 log.go:172] (0xc000ab8f20) (0xc000a9c5a0) Stream added, broadcasting: 1\nI0104 14:46:00.195178 3250 log.go:172] (0xc000ab8f20) Reply frame received for 1\nI0104 14:46:00.195255 3250 log.go:172] (0xc000ab8f20) (0xc0005d0fa0) Create stream\nI0104 14:46:00.195306 3250 log.go:172] (0xc000ab8f20) (0xc0005d0fa0) Stream added, broadcasting: 3\nI0104 14:46:00.196575 3250 log.go:172] (0xc000ab8f20) Reply frame received for 3\nI0104 14:46:00.196642 3250 log.go:172] (0xc000ab8f20) (0xc00080fd60) Create stream\nI0104 14:46:00.196655 3250 log.go:172] (0xc000ab8f20) (0xc00080fd60) Stream added, broadcasting: 5\nI0104 14:46:00.197475 3250 log.go:172] (0xc000ab8f20) Reply frame received for 5\nI0104 14:46:00.265430 3250 log.go:172] (0xc000ab8f20) Data frame received for 5\nI0104 14:46:00.265591 3250 log.go:172] (0xc00080fd60) (5) Data frame handling\nI0104 14:46:00.265660 3250 log.go:172] (0xc00080fd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:46:00.300367 3250 log.go:172] (0xc000ab8f20) Data frame received for 3\nI0104 14:46:00.300502 3250 log.go:172] (0xc0005d0fa0) (3) Data frame handling\nI0104 14:46:00.300546 3250 log.go:172] (0xc0005d0fa0) (3) Data frame sent\nI0104 14:46:00.405476 3250 log.go:172] (0xc000ab8f20) Data frame received for 1\nI0104 14:46:00.405579 3250 log.go:172] (0xc000ab8f20) (0xc0005d0fa0) Stream removed, broadcasting: 3\nI0104 14:46:00.405659 3250 log.go:172] (0xc000a9c5a0) (1) Data frame handling\nI0104 14:46:00.405685 3250 log.go:172] (0xc000a9c5a0) (1) Data frame sent\nI0104 14:46:00.405693 3250 log.go:172] (0xc000ab8f20) (0xc000a9c5a0) Stream removed, broadcasting: 1\nI0104 14:46:00.406050 3250 log.go:172] (0xc000ab8f20) (0xc00080fd60) Stream removed, broadcasting: 5\nI0104 14:46:00.406091 3250 log.go:172] (0xc000ab8f20) (0xc000a9c5a0) Stream removed, broadcasting: 1\nI0104 14:46:00.406098 3250 log.go:172] (0xc000ab8f20) (0xc0005d0fa0) Stream removed, broadcasting: 3\nI0104 14:46:00.406104 3250 log.go:172] (0xc000ab8f20) (0xc00080fd60) Stream removed, broadcasting: 5\n" Jan 4 14:46:00.423: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:46:00.423: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:46:00.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 4 14:46:00.871: INFO: stderr: "I0104 14:46:00.586614 3273 log.go:172] (0xc00097a6e0) (0xc0006db4a0) Create stream\nI0104 14:46:00.587397 3273 log.go:172] (0xc00097a6e0) (0xc0006db4a0) Stream added, broadcasting: 1\nI0104 14:46:00.606933 3273 log.go:172] (0xc00097a6e0) Reply frame received for 1\nI0104 14:46:00.606995 
3273 log.go:172] (0xc00097a6e0) (0xc00096c000) Create stream\nI0104 14:46:00.607000 3273 log.go:172] (0xc00097a6e0) (0xc00096c000) Stream added, broadcasting: 3\nI0104 14:46:00.608642 3273 log.go:172] (0xc00097a6e0) Reply frame received for 3\nI0104 14:46:00.608663 3273 log.go:172] (0xc00097a6e0) (0xc000a34000) Create stream\nI0104 14:46:00.608670 3273 log.go:172] (0xc00097a6e0) (0xc000a34000) Stream added, broadcasting: 5\nI0104 14:46:00.609908 3273 log.go:172] (0xc00097a6e0) Reply frame received for 5\nI0104 14:46:00.739345 3273 log.go:172] (0xc00097a6e0) Data frame received for 5\nI0104 14:46:00.739456 3273 log.go:172] (0xc000a34000) (5) Data frame handling\nI0104 14:46:00.739489 3273 log.go:172] (0xc000a34000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0104 14:46:00.771249 3273 log.go:172] (0xc00097a6e0) Data frame received for 3\nI0104 14:46:00.771326 3273 log.go:172] (0xc00096c000) (3) Data frame handling\nI0104 14:46:00.771346 3273 log.go:172] (0xc00096c000) (3) Data frame sent\nI0104 14:46:00.863459 3273 log.go:172] (0xc00097a6e0) (0xc00096c000) Stream removed, broadcasting: 3\nI0104 14:46:00.863535 3273 log.go:172] (0xc00097a6e0) Data frame received for 1\nI0104 14:46:00.863550 3273 log.go:172] (0xc00097a6e0) (0xc000a34000) Stream removed, broadcasting: 5\nI0104 14:46:00.863571 3273 log.go:172] (0xc0006db4a0) (1) Data frame handling\nI0104 14:46:00.863578 3273 log.go:172] (0xc0006db4a0) (1) Data frame sent\nI0104 14:46:00.863591 3273 log.go:172] (0xc00097a6e0) (0xc0006db4a0) Stream removed, broadcasting: 1\nI0104 14:46:00.863601 3273 log.go:172] (0xc00097a6e0) Go away received\nI0104 14:46:00.864055 3273 log.go:172] (0xc00097a6e0) (0xc0006db4a0) Stream removed, broadcasting: 1\nI0104 14:46:00.864077 3273 log.go:172] (0xc00097a6e0) (0xc00096c000) Stream removed, broadcasting: 3\nI0104 14:46:00.864101 3273 log.go:172] (0xc00097a6e0) (0xc000a34000) Stream removed, broadcasting: 5\n" Jan 4 14:46:00.872: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 4 14:46:00.872: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 4 14:46:00.872: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 14:46:00.899: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 4 14:46:10.910: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:46:10.910: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:46:10.910: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 4 14:46:10.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999556s Jan 4 14:46:11.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988802675s Jan 4 14:46:12.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979898471s Jan 4 14:46:13.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974004945s Jan 4 14:46:14.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967731087s Jan 4 14:46:15.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.957850318s Jan 4 14:46:16.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.951370699s Jan 4 14:46:18.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.947350348s Jan 4 14:46:19.147: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 1.910900835s Jan 4 14:46:20.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 772.441706ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7086 Jan 4 14:46:21.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:46:24.087: INFO: stderr: "I0104 14:46:23.753295 3293 log.go:172] (0xc0007ae0b0) (0xc0002af4a0) Create stream\nI0104 14:46:23.753393 3293 log.go:172] (0xc0007ae0b0) (0xc0002af4a0) Stream added, broadcasting: 1\nI0104 14:46:23.762766 3293 log.go:172] (0xc0007ae0b0) Reply frame received for 1\nI0104 14:46:23.762827 3293 log.go:172] (0xc0007ae0b0) (0xc00085a0a0) Create stream\nI0104 14:46:23.762838 3293 log.go:172] (0xc0007ae0b0) (0xc00085a0a0) Stream added, broadcasting: 3\nI0104 14:46:23.764320 3293 log.go:172] (0xc0007ae0b0) Reply frame received for 3\nI0104 14:46:23.764562 3293 log.go:172] (0xc0007ae0b0) (0xc00085a140) Create stream\nI0104 14:46:23.764581 3293 log.go:172] (0xc0007ae0b0) (0xc00085a140) Stream added, broadcasting: 5\nI0104 14:46:23.768165 3293 log.go:172] (0xc0007ae0b0) Reply frame received for 5\nI0104 14:46:23.908989 3293 log.go:172] (0xc0007ae0b0) Data frame received for 3\nI0104 14:46:23.909124 3293 log.go:172] (0xc00085a0a0) (3) Data frame handling\nI0104 14:46:23.909153 3293 log.go:172] (0xc00085a0a0) (3) Data frame sent\nI0104 14:46:23.909208 3293 log.go:172] (0xc0007ae0b0) Data frame received for 5\nI0104 14:46:23.909220 3293 log.go:172] (0xc00085a140) (5) Data frame handling\nI0104 14:46:23.909238 3293 log.go:172] (0xc00085a140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:46:24.073136 3293 log.go:172] (0xc0007ae0b0) (0xc00085a0a0) Stream removed, broadcasting: 3\nI0104 14:46:24.073270 3293 log.go:172] (0xc0007ae0b0) Data frame received for 1\nI0104 14:46:24.073283 3293 log.go:172] (0xc0002af4a0) (1) Data frame handling\nI0104 14:46:24.073343 3293 log.go:172] (0xc0002af4a0) (1) Data frame sent\nI0104 14:46:24.073353 3293 log.go:172] (0xc0007ae0b0) (0xc0002af4a0) Stream removed, broadcasting: 1\nI0104 14:46:24.073413 3293 log.go:172] (0xc0007ae0b0) (0xc00085a140) Stream removed, broadcasting: 5\nI0104 14:46:24.073494 3293 log.go:172] (0xc0007ae0b0) Go away received\nI0104 14:46:24.074092 3293 log.go:172] (0xc0007ae0b0) (0xc0002af4a0) Stream removed, broadcasting: 1\nI0104 14:46:24.074127 3293 log.go:172] (0xc0007ae0b0) (0xc00085a0a0) Stream removed, broadcasting: 3\nI0104 14:46:24.074131 3293 log.go:172] (0xc0007ae0b0) (0xc00085a140) Stream removed, broadcasting: 5\n" Jan 4 14:46:24.087: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:46:24.087: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:46:24.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:46:24.406: INFO: stderr: "I0104 14:46:24.193497 3323 log.go:172] (0xc000930000) (0xc000770000) Create stream\nI0104 14:46:24.194031 3323 log.go:172] (0xc000930000) (0xc000770000) Stream added, broadcasting: 1\nI0104 14:46:24.197611 3323 log.go:172] (0xc000930000) Reply frame received for 1\nI0104 14:46:24.197706 3323 log.go:172] 
(0xc000930000) (0xc000872000) Create stream\nI0104 14:46:24.197721 3323 log.go:172] (0xc000930000) (0xc000872000) Stream added, broadcasting: 3\nI0104 14:46:24.199088 3323 log.go:172] (0xc000930000) Reply frame received for 3\nI0104 14:46:24.199114 3323 log.go:172] (0xc000930000) (0xc0007700a0) Create stream\nI0104 14:46:24.199129 3323 log.go:172] (0xc000930000) (0xc0007700a0) Stream added, broadcasting: 5\nI0104 14:46:24.200370 3323 log.go:172] (0xc000930000) Reply frame received for 5\nI0104 14:46:24.286845 3323 log.go:172] (0xc000930000) Data frame received for 3\nI0104 14:46:24.287037 3323 log.go:172] (0xc000872000) (3) Data frame handling\nI0104 14:46:24.287050 3323 log.go:172] (0xc000872000) (3) Data frame sent\nI0104 14:46:24.287080 3323 log.go:172] (0xc000930000) Data frame received for 5\nI0104 14:46:24.287086 3323 log.go:172] (0xc0007700a0) (5) Data frame handling\nI0104 14:46:24.287095 3323 log.go:172] (0xc0007700a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:46:24.401398 3323 log.go:172] (0xc000930000) Data frame received for 1\nI0104 14:46:24.401444 3323 log.go:172] (0xc000770000) (1) Data frame handling\nI0104 14:46:24.401458 3323 log.go:172] (0xc000770000) (1) Data frame sent\nI0104 14:46:24.401593 3323 log.go:172] (0xc000930000) (0xc000770000) Stream removed, broadcasting: 1\nI0104 14:46:24.401909 3323 log.go:172] (0xc000930000) (0xc000872000) Stream removed, broadcasting: 3\nI0104 14:46:24.402079 3323 log.go:172] (0xc000930000) (0xc0007700a0) Stream removed, broadcasting: 5\nI0104 14:46:24.402096 3323 log.go:172] (0xc000930000) (0xc000770000) Stream removed, broadcasting: 1\nI0104 14:46:24.402157 3323 log.go:172] (0xc000930000) (0xc000872000) Stream removed, broadcasting: 3\nI0104 14:46:24.402172 3323 log.go:172] (0xc000930000) (0xc0007700a0) Stream removed, broadcasting: 5\n" Jan 4 14:46:24.406: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:46:24.406: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:46:24.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7086 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 4 14:46:24.751: INFO: stderr: "I0104 14:46:24.551283 3341 log.go:172] (0xc000a1fce0) (0xc000930960) Create stream\nI0104 14:46:24.551411 3341 log.go:172] (0xc000a1fce0) (0xc000930960) Stream added, broadcasting: 1\nI0104 14:46:24.562258 3341 log.go:172] (0xc000a1fce0) Reply frame received for 1\nI0104 14:46:24.562304 3341 log.go:172] (0xc000a1fce0) (0xc0007dba40) Create stream\nI0104 14:46:24.562312 3341 log.go:172] (0xc000a1fce0) (0xc0007dba40) Stream added, broadcasting: 3\nI0104 14:46:24.563492 3341 log.go:172] (0xc000a1fce0) Reply frame received for 3\nI0104 14:46:24.563524 3341 log.go:172] (0xc000a1fce0) (0xc000670640) Create stream\nI0104 14:46:24.563532 3341 log.go:172] (0xc000a1fce0) (0xc000670640) Stream added, broadcasting: 5\nI0104 14:46:24.564370 3341 log.go:172] (0xc000a1fce0) Reply frame received for 5\nI0104 14:46:24.649442 3341 log.go:172] (0xc000a1fce0) Data frame received for 3\nI0104 14:46:24.649481 3341 log.go:172] (0xc0007dba40) (3) Data frame handling\nI0104 14:46:24.649498 3341 log.go:172] (0xc0007dba40) (3) Data frame sent\nI0104 14:46:24.649563 3341 log.go:172] (0xc000a1fce0) Data frame received for 5\nI0104 14:46:24.649572 3341 log.go:172] (0xc000670640) (5) Data frame 
handling\nI0104 14:46:24.649584 3341 log.go:172] (0xc000670640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0104 14:46:24.746439 3341 log.go:172] (0xc000a1fce0) (0xc0007dba40) Stream removed, broadcasting: 3\nI0104 14:46:24.746582 3341 log.go:172] (0xc000a1fce0) Data frame received for 1\nI0104 14:46:24.746607 3341 log.go:172] (0xc000930960) (1) Data frame handling\nI0104 14:46:24.746737 3341 log.go:172] (0xc000a1fce0) (0xc000670640) Stream removed, broadcasting: 5\nI0104 14:46:24.746770 3341 log.go:172] (0xc000930960) (1) Data frame sent\nI0104 14:46:24.746776 3341 log.go:172] (0xc000a1fce0) (0xc000930960) Stream removed, broadcasting: 1\nI0104 14:46:24.746781 3341 log.go:172] (0xc000a1fce0) Go away received\nI0104 14:46:24.747277 3341 log.go:172] (0xc000a1fce0) (0xc000930960) Stream removed, broadcasting: 1\nI0104 14:46:24.747289 3341 log.go:172] (0xc000a1fce0) (0xc0007dba40) Stream removed, broadcasting: 3\nI0104 14:46:24.747296 3341 log.go:172] (0xc000a1fce0) (0xc000670640) Stream removed, broadcasting: 5\n" Jan 4 14:46:24.751: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 4 14:46:24.751: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 4 14:46:24.751: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 4 14:46:54.845: INFO: Deleting all statefulset in ns statefulset-7086 Jan 4 14:46:54.853: INFO: Scaling statefulset ss to 0 Jan 4 14:46:54.870: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 14:46:54.872: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:46:54.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7086" for this suite. • [SLOW TEST:106.723 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":128,"skipped":2099,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:46:54.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:46:55.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3044" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":129,"skipped":2102,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:46:55.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 4 14:46:55.415: INFO: Waiting up to 5m0s for pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5" in namespace "downward-api-5935" to be "success or failure" Jan 4 14:46:55.556: INFO: Pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 140.820377ms Jan 4 14:46:57.564: INFO: Pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148182834s Jan 4 14:46:59.571: INFO: Pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15515257s Jan 4 14:47:01.579: INFO: Pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163659303s Jan 4 14:47:03.591: INFO: Pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.175183686s STEP: Saw pod success Jan 4 14:47:03.591: INFO: Pod "downward-api-da9f8f49-8628-48d3-8867-750334a14cb5" satisfied condition "success or failure" Jan 4 14:47:03.600: INFO: Trying to get logs from node jerma-node pod downward-api-da9f8f49-8628-48d3-8867-750334a14cb5 container dapi-container: STEP: delete the pod Jan 4 14:47:03.700: INFO: Waiting for pod downward-api-da9f8f49-8628-48d3-8867-750334a14cb5 to disappear Jan 4 14:47:03.716: INFO: Pod downward-api-da9f8f49-8628-48d3-8867-750334a14cb5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:47:03.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5935" for this suite. • [SLOW TEST:8.563 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2134,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:47:03.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 4 14:47:03.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1977' Jan 4 14:47:04.378: INFO: stderr: "" Jan 4 14:47:04.378: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 4 14:47:05.454: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:05.455: INFO: Found 0 / 1 Jan 4 14:47:06.387: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:06.387: INFO: Found 0 / 1 Jan 4 14:47:07.416: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:07.416: INFO: Found 0 / 1 Jan 4 14:47:08.387: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:08.387: INFO: Found 0 / 1 Jan 4 14:47:09.387: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:09.387: INFO: Found 0 / 1 Jan 4 14:47:10.386: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:10.386: INFO: Found 0 / 1 Jan 4 14:47:11.389: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:11.389: INFO: Found 1 / 1 Jan 4 14:47:11.389: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Jan 4 14:47:11.404: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:11.404: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 4 14:47:11.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-dm8tw --namespace=kubectl-1977 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 4 14:47:11.582: INFO: stderr: "" Jan 4 14:47:11.582: INFO: stdout: "pod/agnhost-master-dm8tw patched\n" STEP: checking annotations Jan 4 14:47:11.587: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:47:11.587: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:47:11.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1977" for this suite. • [SLOW TEST:7.866 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":131,"skipped":2136,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:47:11.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:47:11.688: INFO: Creating ReplicaSet my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f Jan 4 14:47:11.707: INFO: Pod name my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f: Found 0 pods out of 1 Jan 4 14:47:16.714: INFO: Pod name my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f: Found 1 pods out of 1 Jan 4 14:47:16.714: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f" is running Jan 4 14:47:22.755: INFO: Pod "my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f-d7d9s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:47:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:47:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:47:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:47:11 +0000 UTC Reason: Message:}]) Jan 4 14:47:22.755: INFO: Trying to dial the pod Jan 4 14:47:27.793: INFO: Controller my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f: Got expected result from replica 1 [my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f-d7d9s]: "my-hostname-basic-bb2b59fb-716c-4dcf-999b-9577bffd776f-d7d9s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:47:27.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9426" for this suite. • [SLOW TEST:16.210 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":132,"skipped":2149,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:47:27.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-11, will wait for the garbage collector to delete the pods Jan 4 14:47:42.026: INFO: Deleting Job.batch foo took: 19.729962ms Jan 4 14:47:42.127: INFO: Terminating Job.batch foo pods took: 100.476154ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:48:22.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-11" for this suite. 
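------------------------------
Editor's note: the Job deletion above reports that the suite "will wait for the garbage collector to delete the pods". Below is a minimal client-go sketch of the same effect, assuming a client-go release (v0.18 or later) whose typed clients take a context; foreground propagation blocks the delete until the garbage collector has removed the Job's dependent pods. The kubeconfig path matches the one this suite logs; the rest is illustrative, not the framework's own deletion helper.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the Job only disappears once the garbage
	// collector has deleted its dependent pods.
	policy := metav1.DeletePropagationForeground
	if err := clientset.BatchV1().Jobs("job-11").Delete(
		context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
	fmt.Println("Job foo deleted; pods cleaned up by the garbage collector")
}

------------------------------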
• [SLOW TEST:54.662 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":133,"skipped":2165,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:48:22.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-100e5d12-6390-4daf-b5d0-10930dc30c21 STEP: Creating a pod to test consume configMaps Jan 4 14:48:22.602: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870" in namespace "projected-3879" to be "success or failure" Jan 4 14:48:22.617: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Pending", Reason="", readiness=false. Elapsed: 14.422329ms Jan 4 14:48:24.625: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021997962s Jan 4 14:48:26.630: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027750379s Jan 4 14:48:28.636: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033596467s Jan 4 14:48:30.642: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039887699s Jan 4 14:48:32.647: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044622657s Jan 4 14:48:34.657: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.054790655s STEP: Saw pod success Jan 4 14:48:34.658: INFO: Pod "pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870" satisfied condition "success or failure" Jan 4 14:48:34.661: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870 container projected-configmap-volume-test: STEP: delete the pod Jan 4 14:48:34.743: INFO: Waiting for pod pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870 to disappear Jan 4 14:48:34.757: INFO: Pod pod-projected-configmaps-72410432-3fa5-49dc-9c34-a7379d5ab870 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:48:34.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3879" for this suite. • [SLOW TEST:12.306 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:48:34.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 4 14:48:34.973: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:48:53.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8514" for this suite. 
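------------------------------
Editor's note: a sketch of the multi-version CustomResourceDefinition shape behind the "updates the published spec when one version gets renamed" spec above, using the apiextensions v1 Go types. Renaming a served version is just an update that swaps one entry's name; the apiserver then republishes the OpenAPI document under the new version and drops the old one, which is what the STEPs above check. The group, kind, and version names here are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// version builds a served CRD version with a permissive object schema.
func version(name string, storage bool) apiextensionsv1.CustomResourceDefinitionVersion {
	return apiextensionsv1.CustomResourceDefinitionVersion{
		Name:    name,
		Served:  true,
		Storage: storage,
		Schema: &apiextensionsv1.CustomResourceValidation{
			OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
		},
	}
}

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				version("v1", true),  // storage version, left untouched by the rename
				version("v2", false), // the version the test renames
			},
		},
	}
	crd.Spec.Versions[1] = version("v3", false) // "rename" v2 -> v3 via an update
	out, _ := json.MarshalIndent(crd.Spec.Versions, "", "  ")
	fmt.Println(string(out))
}

------------------------------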
• [SLOW TEST:18.255 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":135,"skipped":2240,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:48:53.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-d4ed9bdb-c44d-448b-b4ac-9d21a23a0fbd STEP: Creating a pod to test consume configMaps Jan 4 14:48:53.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013" in namespace "configmap-2648" to be "success or failure" Jan 4 14:48:53.213: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013": Phase="Pending", Reason="", readiness=false. Elapsed: 5.551146ms Jan 4 14:48:55.223: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015146121s Jan 4 14:48:57.233: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025129388s Jan 4 14:48:59.241: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033295356s Jan 4 14:49:01.247: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039436235s Jan 4 14:49:03.254: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045857163s STEP: Saw pod success Jan 4 14:49:03.254: INFO: Pod "pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013" satisfied condition "success or failure" Jan 4 14:49:03.259: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013 container configmap-volume-test: STEP: delete the pod Jan 4 14:49:03.395: INFO: Waiting for pod pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013 to disappear Jan 4 14:49:03.405: INFO: Pod pod-configmaps-f93f0fed-3a11-4111-8072-50d69e90e013 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:49:03.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2648" for this suite. 
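------------------------------
Editor's note: a sketch of the pod shape the "consumable from pods in volume with mappings as non-root" spec above creates: a ConfigMap volume whose items remap a key to a nested path, a defaultMode applied to the projected files, and a pod-level runAsUser pointing at a non-root UID. The key names, UID, mode, and image below are illustrative assumptions; the real test uses the suite's own mounttest image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)   // any non-root UID
	mode := int32(0o400) // file mode applied to projected keys
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map",
						},
						// Remap key "data-1" to the nested path "path/to/data-2".
						Items:       []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

------------------------------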
• [SLOW TEST:10.377 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:49:03.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 14:49:03.606: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 4 14:49:06.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8783 create -f -' Jan 4 14:49:09.019: INFO: stderr: "" Jan 4 14:49:09.019: INFO: stdout: "e2e-test-crd-publish-openapi-7077-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 4 14:49:09.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8783 delete e2e-test-crd-publish-openapi-7077-crds test-cr' Jan 4 14:49:09.153: INFO: stderr: "" Jan 4 14:49:09.153: INFO: stdout: "e2e-test-crd-publish-openapi-7077-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 4 14:49:09.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8783 apply -f -' Jan 4 14:49:09.382: INFO: stderr: "" Jan 4 14:49:09.382: INFO: stdout: "e2e-test-crd-publish-openapi-7077-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 4 14:49:09.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8783 delete e2e-test-crd-publish-openapi-7077-crds test-cr' Jan 4 14:49:09.469: INFO: stderr: "" Jan 4 14:49:09.469: INFO: stdout: "e2e-test-crd-publish-openapi-7077-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 4 14:49:09.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7077-crds' Jan 4 14:49:09.784: INFO: stderr: "" Jan 4 14:49:09.784: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7077-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned 
schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:49:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8783" for this suite. • [SLOW TEST:8.185 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":137,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:49:11.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:49:11.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-903" for this suite. 
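The secure-master-service test that just finished asserts essentially that the auto-created kubernetes service in the default namespace fronts the API server over HTTPS. A quick manual equivalent, using only standard kubectl (nothing below is specific to this run):

# The default/kubernetes service should expose 443/TCP and point at the API server
kubectl get service kubernetes -n default -o wide
kubectl get endpoints kubernetes -n default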
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":138,"skipped":2368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:49:11.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6435.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6435.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6435.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6435.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6435.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6435.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 14:49:25.969: INFO: DNS probes using dns-6435/dns-test-c713b718-89a0-4e14-a4e8-5991ce5a67ce succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:49:26.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6435" for this suite. 
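The getent loops above succeed because the kubelet manages each pod's /etc/hosts, adding the pod's own hostname and, when spec.hostname/spec.subdomain are set, its service FQDN. A one-shot look at that file, as a sketch (pod name and image are illustrative):

kubectl run hosts-check --image=busybox:1.28 --restart=Never -- cat /etc/hosts
kubectl logs hosts-check      # shows the "# Kubernetes-managed hosts file" entries
kubectl delete pod hosts-check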
• [SLOW TEST:14.265 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":139,"skipped":2397,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:49:26.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:49:39.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1247" for this suite. • [SLOW TEST:13.124 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":140,"skipped":2403,"failed":0} SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:49:39.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-6bd007d6-416b-4133-9322-ea48cbeb0d84 STEP: Creating configMap with name cm-test-opt-upd-25f766d1-5058-4437-9a68-2c3f5fce0bc0 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6bd007d6-416b-4133-9322-ea48cbeb0d84 STEP: Updating configmap cm-test-opt-upd-25f766d1-5058-4437-9a68-2c3f5fce0bc0 STEP: Creating configMap with name cm-test-opt-create-d54ac95a-4a4b-484c-8115-12601e1440ef STEP: waiting to observe update in volume [AfterEach] [sig-storage] 
Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:49:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2015" for this suite. • [SLOW TEST:14.405 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2405,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:49:53.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 4 14:49:53.735: INFO: Waiting up to 5m0s for pod "pod-914cb714-cae7-4282-8a13-ec2332496130" in namespace "emptydir-4570" to be "success or failure" Jan 4 14:49:53.755: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130": Phase="Pending", Reason="", readiness=false. Elapsed: 19.375891ms Jan 4 14:49:55.762: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027212492s Jan 4 14:49:57.768: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032971471s Jan 4 14:49:59.791: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055321716s Jan 4 14:50:01.796: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06059115s Jan 4 14:50:03.801: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065433053s STEP: Saw pod success Jan 4 14:50:03.801: INFO: Pod "pod-914cb714-cae7-4282-8a13-ec2332496130" satisfied condition "success or failure" Jan 4 14:50:03.804: INFO: Trying to get logs from node jerma-node pod pod-914cb714-cae7-4282-8a13-ec2332496130 container test-container: STEP: delete the pod Jan 4 14:50:03.988: INFO: Waiting for pod pod-914cb714-cae7-4282-8a13-ec2332496130 to disappear Jan 4 14:50:03.998: INFO: Pod pod-914cb714-cae7-4282-8a13-ec2332496130 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:50:03.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4570" for this suite. 
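The emptyDir permission matrix exercised by these cases ((non-root,0777,default) and siblings) boils down to a pod of the following shape; a sketch with illustrative names, where "default" means the node's disk-backed medium and the tmpfs variants set medium: Memory:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  securityContext:
    runAsUser: 1000            # the non-root case
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "ls -ld /mnt && touch /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}               # default medium; use {medium: Memory} for the tmpfs variants
EOF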
• [SLOW TEST:10.422 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2423,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:50:04.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 4 14:50:04.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-849' Jan 4 14:50:04.300: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 14:50:04.300: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Jan 4 14:50:04.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-849' Jan 4 14:50:04.583: INFO: stderr: "" Jan 4 14:50:04.584: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:50:04.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-849" for this suite. 
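The stderr line above spells out the replacement for the deprecated generator. The same job without kubectl run, using the kubectl create subcommand the warning points toward (note kubectl create job emits restartPolicy: Never; use a full manifest if OnFailure specifically matters):

kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
kubectl get jobs e2e-test-httpd-job
kubectl delete job e2e-test-httpd-job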
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":143,"skipped":2426,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:50:04.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2460 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 4 14:50:04.818: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 4 14:50:47.531: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2460 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:50:47.531: INFO: >>> kubeConfig: /root/.kube/config I0104 14:50:47.577749 9 log.go:172] (0xc0027d2210) (0xc000f33400) Create stream I0104 14:50:47.577814 9 log.go:172] (0xc0027d2210) (0xc000f33400) Stream added, broadcasting: 1 I0104 14:50:47.584132 9 log.go:172] (0xc0027d2210) Reply frame received for 1 I0104 14:50:47.584156 9 log.go:172] (0xc0027d2210) (0xc000f9c1e0) Create stream I0104 14:50:47.584164 9 log.go:172] (0xc0027d2210) (0xc000f9c1e0) Stream added, broadcasting: 3 I0104 14:50:47.585574 9 log.go:172] (0xc0027d2210) Reply frame received for 3 I0104 14:50:47.585593 9 log.go:172] (0xc0027d2210) (0xc0015cab40) Create stream I0104 14:50:47.585605 9 log.go:172] (0xc0027d2210) (0xc0015cab40) Stream added, broadcasting: 5 I0104 14:50:47.589692 9 log.go:172] (0xc0027d2210) Reply frame received for 5 I0104 14:50:47.651491 9 log.go:172] (0xc0027d2210) Data frame received for 3 I0104 14:50:47.651534 9 log.go:172] (0xc000f9c1e0) (3) Data frame handling I0104 14:50:47.651547 9 log.go:172] (0xc000f9c1e0) (3) Data frame sent I0104 14:50:47.767747 9 log.go:172] (0xc0027d2210) (0xc000f9c1e0) Stream removed, broadcasting: 3 I0104 14:50:47.767870 9 log.go:172] (0xc0027d2210) Data frame received for 1 I0104 14:50:47.767888 9 log.go:172] (0xc000f33400) (1) Data frame handling I0104 14:50:47.767908 9 log.go:172] (0xc000f33400) (1) Data frame sent I0104 14:50:47.767918 9 log.go:172] (0xc0027d2210) (0xc000f33400) Stream removed, broadcasting: 1 I0104 14:50:47.768334 9 log.go:172] (0xc0027d2210) (0xc0015cab40) Stream removed, broadcasting: 5 I0104 14:50:47.768404 9 log.go:172] (0xc0027d2210) (0xc000f33400) Stream removed, broadcasting: 1 I0104 14:50:47.768440 9 log.go:172] (0xc0027d2210) (0xc000f9c1e0) Stream removed, broadcasting: 3 I0104 14:50:47.768448 9 log.go:172] (0xc0027d2210) (0xc0015cab40) Stream removed, broadcasting: 5 I0104 14:50:47.768517 9 log.go:172] (0xc0027d2210) Go away received Jan 4 
14:50:47.768: INFO: Waiting for responses: map[] Jan 4 14:50:47.773: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2460 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:50:47.773: INFO: >>> kubeConfig: /root/.kube/config I0104 14:50:47.818303 9 log.go:172] (0xc002b20370) (0xc000f9c460) Create stream I0104 14:50:47.818427 9 log.go:172] (0xc002b20370) (0xc000f9c460) Stream added, broadcasting: 1 I0104 14:50:47.826645 9 log.go:172] (0xc002b20370) Reply frame received for 1 I0104 14:50:47.826701 9 log.go:172] (0xc002b20370) (0xc001422500) Create stream I0104 14:50:47.826715 9 log.go:172] (0xc002b20370) (0xc001422500) Stream added, broadcasting: 3 I0104 14:50:47.829824 9 log.go:172] (0xc002b20370) Reply frame received for 3 I0104 14:50:47.829850 9 log.go:172] (0xc002b20370) (0xc000f33540) Create stream I0104 14:50:47.829862 9 log.go:172] (0xc002b20370) (0xc000f33540) Stream added, broadcasting: 5 I0104 14:50:47.831338 9 log.go:172] (0xc002b20370) Reply frame received for 5 I0104 14:50:47.961291 9 log.go:172] (0xc002b20370) Data frame received for 3 I0104 14:50:47.961401 9 log.go:172] (0xc001422500) (3) Data frame handling I0104 14:50:47.961438 9 log.go:172] (0xc001422500) (3) Data frame sent I0104 14:50:48.026526 9 log.go:172] (0xc002b20370) (0xc001422500) Stream removed, broadcasting: 3 I0104 14:50:48.026745 9 log.go:172] (0xc002b20370) (0xc000f33540) Stream removed, broadcasting: 5 I0104 14:50:48.026802 9 log.go:172] (0xc002b20370) Data frame received for 1 I0104 14:50:48.026841 9 log.go:172] (0xc000f9c460) (1) Data frame handling I0104 14:50:48.026864 9 log.go:172] (0xc000f9c460) (1) Data frame sent I0104 14:50:48.026873 9 log.go:172] (0xc002b20370) (0xc000f9c460) Stream removed, broadcasting: 1 I0104 14:50:48.026898 9 log.go:172] (0xc002b20370) Go away received I0104 14:50:48.027124 9 log.go:172] (0xc002b20370) (0xc000f9c460) Stream removed, broadcasting: 1 I0104 14:50:48.027156 9 log.go:172] (0xc002b20370) (0xc001422500) Stream removed, broadcasting: 3 I0104 14:50:48.027168 9 log.go:172] (0xc002b20370) (0xc000f33540) Stream removed, broadcasting: 5 Jan 4 14:50:48.027: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:50:48.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2460" for this suite. 
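The two ExecWithOptions blocks above are simply curl runs against agnhost's /dial endpoint, which asks one test pod to probe another over UDP and returns the collected responses. The same probe by hand, reusing the pod names and IPs from this particular run (they change every run):

kubectl exec -n pod-network-test-2460 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
# an empty "Waiting for responses: map[]" in the log means every expected
# hostname answered, i.e. the intra-pod UDP path works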
• [SLOW TEST:43.432 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2435,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:50:48.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 4 14:50:48.163: INFO: Waiting up to 5m0s for pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419" in namespace "emptydir-4419" to be "success or failure" Jan 4 14:50:48.185: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 22.578992ms Jan 4 14:50:50.192: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029493419s Jan 4 14:50:52.201: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037968412s Jan 4 14:50:54.634: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470910784s Jan 4 14:50:57.641: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 9.478393456s Jan 4 14:50:59.647: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 11.48466737s Jan 4 14:51:01.659: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 13.496739578s Jan 4 14:51:03.665: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Pending", Reason="", readiness=false. Elapsed: 15.501854224s Jan 4 14:51:05.672: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.509334629s STEP: Saw pod success Jan 4 14:51:05.672: INFO: Pod "pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419" satisfied condition "success or failure" Jan 4 14:51:05.676: INFO: Trying to get logs from node jerma-node pod pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419 container test-container: STEP: delete the pod Jan 4 14:51:05.812: INFO: Waiting for pod pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419 to disappear Jan 4 14:51:05.865: INFO: Pod pod-63a0f6a7-1ecc-4d11-ac2e-8cae13747419 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:51:05.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4419" for this suite. • [SLOW TEST:17.843 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2437,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:51:05.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-ea4cb36b-36bd-4b0e-b1c0-ecaefd903907 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ea4cb36b-36bd-4b0e-b1c0-ecaefd903907 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:52:31.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7626" for this suite. 
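The updates-reflected-in-volume behavior relies on the kubelet periodically resyncing configMap-backed volumes, which is why the log waits well over a minute between "Updating configmap" and the observation. A hand-rolled version, with illustrative names and a placeholder pod:

kubectl create configmap live-cm --from-literal=key=v1
# ...mount live-cm into a running pod as a configMap volume at /etc/cm, then:
kubectl patch configmap live-cm -p '{"data":{"key":"v2"}}'
# after the kubelet's next sync the mounted file flips to the new value
kubectl exec <pod-name> -- cat /etc/cm/key     # <pod-name> is a placeholder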
• [SLOW TEST:85.337 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2445,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:52:31.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5007.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5007.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 14:52:51.382: INFO: DNS probes using dns-test-d73da0e2-f5d7-4f83-8e7a-96089ff6ac8a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5007.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5007.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 14:53:03.573: INFO: File wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 4 14:53:03.579: INFO: File jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 4 14:53:03.579: INFO: Lookups using dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a failed for: [wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local] Jan 4 14:53:08.589: INFO: File wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 4 14:53:08.599: INFO: File jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 4 14:53:08.599: INFO: Lookups using dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a failed for: [wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local] Jan 4 14:53:13.585: INFO: File wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 4 14:53:13.590: INFO: File jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 4 14:53:13.590: INFO: Lookups using dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a failed for: [wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local] Jan 4 14:53:18.596: INFO: File jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local from pod dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 4 14:53:18.596: INFO: Lookups using dns-5007/dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a failed for: [jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local] Jan 4 14:53:23.606: INFO: DNS probes using dns-test-735c198b-5fc2-48b9-a82b-1c38fea9ba6a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5007.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5007.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5007.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5007.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 14:53:43.974: INFO: DNS probes using dns-test-c44b66eb-b485-425d-a006-3c896545df92 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:53:44.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5007" for this suite. 
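An ExternalName service is published by the cluster DNS as a plain CNAME, which is why the probes above watch for foo.example.com to become bar.example.com and finally an A record once the service is switched to ClusterIP. A sketch of the same lifecycle (service and pod names are illustrative):

kubectl create service externalname ext-demo --external-name foo.example.com
kubectl run digger --image=busybox:1.28 --restart=Never -- \
  nslookup ext-demo.default.svc.cluster.local
kubectl logs digger          # expect an alias/CNAME to foo.example.com
# "changing the externalName" in the log is just a spec patch:
kubectl patch service ext-demo -p '{"spec":{"externalName":"bar.example.com"}}'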
• [SLOW TEST:72.878 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":147,"skipped":2447,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:53:44.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 4 14:53:44.184: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6428 /api/v1/namespaces/watch-6428/configmaps/e2e-watch-test-label-changed 30d72e93-e298-43a2-b774-ea52e0d03eb1 36290 0 2020-01-04 14:53:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 14:53:44.185: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6428 /api/v1/namespaces/watch-6428/configmaps/e2e-watch-test-label-changed 30d72e93-e298-43a2-b774-ea52e0d03eb1 36291 0 2020-01-04 14:53:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 4 14:53:44.186: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6428 /api/v1/namespaces/watch-6428/configmaps/e2e-watch-test-label-changed 30d72e93-e298-43a2-b774-ea52e0d03eb1 36292 0 2020-01-04 14:53:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 4 14:53:54.248: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6428 /api/v1/namespaces/watch-6428/configmaps/e2e-watch-test-label-changed 30d72e93-e298-43a2-b774-ea52e0d03eb1 36364 0 2020-01-04 14:53:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 14:53:54.248: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6428 /api/v1/namespaces/watch-6428/configmaps/e2e-watch-test-label-changed 30d72e93-e298-43a2-b774-ea52e0d03eb1 36365 0 2020-01-04 14:53:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 4 14:53:54.248: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6428 /api/v1/namespaces/watch-6428/configmaps/e2e-watch-test-label-changed 30d72e93-e298-43a2-b774-ea52e0d03eb1 36366 0 2020-01-04 14:53:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:53:54.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6428" for this suite. • [SLOW TEST:10.162 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":148,"skipped":2455,"failed":0} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:53:54.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-d5f574a4-dc8c-4525-a30b-0af1b1acba2e STEP: Creating secret with name s-test-opt-upd-d45b30ae-adb2-4b52-a56b-f6e24211597a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d5f574a4-dc8c-4525-a30b-0af1b1acba2e STEP: Updating secret s-test-opt-upd-d45b30ae-adb2-4b52-a56b-f6e24211597a STEP: Creating secret with name s-test-opt-create-a39c14e3-48bb-4c01-a9eb-867bf17806ac STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:55:17.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3428" for this suite. 
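The optional-updates test that just tore down projected-3428 hinges on marking projected sources optional, so the pod keeps running while a referenced secret is absent and picks it up once it is created. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: proj
      mountPath: /etc/proj
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: maybe-missing-secret
          optional: true       # pod starts even if the secret does not exist yet
EOF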
• [SLOW TEST:83.247 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2458,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:55:17.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e6d2f663-135a-49d9-88f0-ad1f09306230 STEP: Creating a pod to test consume secrets Jan 4 14:55:17.617: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76" in namespace "projected-5460" to be "success or failure" Jan 4 14:55:17.630: INFO: Pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76": Phase="Pending", Reason="", readiness=false. Elapsed: 12.311551ms Jan 4 14:55:19.636: INFO: Pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018285655s Jan 4 14:55:21.643: INFO: Pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025081259s Jan 4 14:55:23.648: INFO: Pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030581413s Jan 4 14:55:25.654: INFO: Pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036398803s STEP: Saw pod success Jan 4 14:55:25.654: INFO: Pod "pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76" satisfied condition "success or failure" Jan 4 14:55:25.658: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76 container projected-secret-volume-test: STEP: delete the pod Jan 4 14:55:25.710: INFO: Waiting for pod pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76 to disappear Jan 4 14:55:25.719: INFO: Pod pod-projected-secrets-6e3a1cf5-f50d-4e16-9905-55b47b8edc76 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:55:25.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5460" for this suite. 
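The "mappings and Item Mode" variant differs from the sketch above only in the projected secret source, which adds per-item path remapping and a file mode. A hedged fragment that slots into the previous pod's volumes stanza (octal mode as accepted in YAML manifests):

volumes:
- name: proj
  projected:
    sources:
    - secret:
        name: demo-secret
        items:
        - key: data-1
          path: new-path/data-1
          mode: 0400           # the "Item Mode" in the test name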
• [SLOW TEST:8.295 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2473,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:55:25.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-3e5d973b-6c51-42d5-a1e3-4c6d1aee4e47 STEP: Creating a pod to test consume secrets Jan 4 14:55:25.952: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8" in namespace "projected-5263" to be "success or failure" Jan 4 14:55:25.985: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.843523ms Jan 4 14:55:27.994: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040802433s Jan 4 14:55:30.000: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047243363s Jan 4 14:55:32.006: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053474549s Jan 4 14:55:34.010: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057713798s Jan 4 14:55:36.016: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063273857s STEP: Saw pod success Jan 4 14:55:36.016: INFO: Pod "pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8" satisfied condition "success or failure" Jan 4 14:55:36.020: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8 container projected-secret-volume-test: STEP: delete the pod Jan 4 14:55:36.057: INFO: Waiting for pod pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8 to disappear Jan 4 14:55:36.123: INFO: Pod pod-projected-secrets-79d2120c-bbbf-4d84-a071-ef506b65e9e8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:55:36.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5263" for this suite. 
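Independent of any pod, the payload such a volume would expose can be checked straight from the API; base64 decoding is the only transformation the volume plugin performs here. Illustrative names again:

kubectl create secret generic demo-secret --from-literal=data-1='value-1'
kubectl get secret demo-secret -o jsonpath='{.data.data-1}' | base64 -d   # prints value-1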
• [SLOW TEST:10.332 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2481,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:55:36.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 4 14:55:36.294: INFO: namespace kubectl-4366 Jan 4 14:55:36.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4366' Jan 4 14:55:36.863: INFO: stderr: "" Jan 4 14:55:36.863: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 4 14:55:37.871: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:37.871: INFO: Found 0 / 1 Jan 4 14:55:38.873: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:38.873: INFO: Found 0 / 1 Jan 4 14:55:39.871: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:39.871: INFO: Found 0 / 1 Jan 4 14:55:40.870: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:40.871: INFO: Found 0 / 1 Jan 4 14:55:41.871: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:41.871: INFO: Found 0 / 1 Jan 4 14:55:42.875: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:42.875: INFO: Found 0 / 1 Jan 4 14:55:43.871: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:43.871: INFO: Found 0 / 1 Jan 4 14:55:44.871: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:44.871: INFO: Found 0 / 1 Jan 4 14:55:45.872: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:45.872: INFO: Found 1 / 1 Jan 4 14:55:45.872: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 4 14:55:45.877: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 14:55:45.877: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 4 14:55:45.877: INFO: wait on agnhost-master startup in kubectl-4366 Jan 4 14:55:45.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-z7gqp agnhost-master --namespace=kubectl-4366' Jan 4 14:55:46.090: INFO: stderr: "" Jan 4 14:55:46.090: INFO: stdout: "Paused\n" STEP: exposing RC Jan 4 14:55:46.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4366' Jan 4 14:55:46.261: INFO: stderr: "" Jan 4 14:55:46.261: INFO: stdout: "service/rm2 exposed\n" Jan 4 14:55:46.455: INFO: Service rm2 in namespace kubectl-4366 found. STEP: exposing service Jan 4 14:55:48.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4366' Jan 4 14:55:48.693: INFO: stderr: "" Jan 4 14:55:48.693: INFO: stdout: "service/rm3 exposed\n" Jan 4 14:55:48.698: INFO: Service rm3 in namespace kubectl-4366 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:55:50.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4366" for this suite. • [SLOW TEST:14.581 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":152,"skipped":2489,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:55:50.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jan 4 14:56:03.323: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9772 pod-service-account-b103c0d9-4060-4782-949b-8eeb11f5ac32 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 4 14:56:03.752: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9772 pod-service-account-b103c0d9-4060-4782-949b-8eeb11f5ac32 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 4 14:56:04.203: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9772 pod-service-account-b103c0d9-4060-4782-949b-8eeb11f5ac32 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 14:56:04.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9772" for this suite.
• [SLOW TEST:13.795 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":153,"skipped":2501,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 14:56:04.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9
Jan 4 14:56:04.718: INFO: Pod name my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9: Found 0 pods out of 1
Jan 4 14:56:09.764: INFO: Pod name my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9: Found 1 pods out of 1
Jan 4 14:56:09.764: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9" are running
Jan 4 14:56:15.781: INFO: Pod "my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9-7sb95" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:56:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:56:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:56:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:56:04 +0000 UTC Reason: Message:}])
Jan 4 14:56:15.781: INFO: Trying to dial the pod
Jan 4 14:56:20.805: INFO: Controller my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9: Got expected result from replica 1 [my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9-7sb95]: "my-hostname-basic-05d6af98-f3cc-430c-aabe-341592e5bdd9-7sb95", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 14:56:20.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2800" for this suite.
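The Deployment spec that follows, whose object dumps take up the rest of this section, exercises proportional scaling, and the replica counts it asserts can be worked out ahead of time. A sketch of the arithmetic, using only numbers that appear in this run (the rounding rule is paraphrased; the controller splits any fractional remainder so the shares still sum to the available headroom):

  desired replicas after scale-up:  30  (scaled from 10)
  maxSurge 3  =>  total replicas allowed: 30 + 3 = 33
  split before scale-up:  old ReplicaSet 8, new ReplicaSet 5  (13 total, surge already in effect)
  headroom to distribute:  33 - 13 = 20
  old ReplicaSet share:  20 * 8/13 ≈ 12  =>  8 + 12 = 20
  new ReplicaSet share:  20 * 5/13 ≈ 8   =>  5 + 8  = 13

This matches the .spec.replicas = 20 and .spec.replicas = 13 values the test verifies below, as well as the deployment.kubernetes.io/max-replicas:33 annotation in the dumps.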
• [SLOW TEST:16.323 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":154,"skipped":2505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 14:56:20.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 4 14:56:20.985: INFO: Creating deployment "webserver-deployment"
Jan 4 14:56:20.994: INFO: Waiting for observed generation 1
Jan 4 14:56:23.947: INFO: Waiting for all required pods to come up
Jan 4 14:56:23.961: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 4 14:56:44.650: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 4 14:56:44.676: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 4 14:56:44.681: INFO: Updating deployment webserver-deployment
Jan 4 14:56:44.681: INFO: Waiting for observed generation 2
Jan 4 14:56:46.906: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 4 14:56:47.124: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 4 14:56:47.699: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 4 14:56:47.926: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 4 14:56:47.926: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 4 14:56:47.955: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 4 14:56:49.272: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 4 14:56:49.272: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 4 14:56:49.286: INFO: Updating deployment webserver-deployment
Jan 4 14:56:49.286: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 4 14:56:49.521: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 4 14:56:52.931: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 4 14:56:56.388: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-5580 /apis/apps/v1/namespaces/deployment-5580/deployments/webserver-deployment 29a2efcd-8352-48f4-b78d-389720405bf0 37211 3 2020-01-04 14:56:20 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e322a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-04 14:56:49 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-04 14:56:50 +0000 UTC,LastTransitionTime:2020-01-04 14:56:20 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 4 14:56:56.403: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5580 /apis/apps/v1/namespaces/deployment-5580/replicasets/webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 37209 3 2020-01-04 14:56:44 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 29a2efcd-8352-48f4-b78d-389720405bf0 0xc003370787 0xc003370788}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033707f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:56:56.404: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 4 14:56:56.404: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5580 /apis/apps/v1/namespaces/deployment-5580/replicasets/webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 37190 3 2020-01-04 14:56:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 29a2efcd-8352-48f4-b78d-389720405bf0 0xc0033706c7 0xc0033706c8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003370728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 4 14:56:57.805: INFO: Pod "webserver-deployment-595b5b9587-6vpwq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6vpwq webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-6vpwq a803dd1d-4efe-4e5b-8006-0fbc086ebf62 37178 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003370cd7 0xc003370cd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.806: INFO: Pod "webserver-deployment-595b5b9587-88rsv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-88rsv webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-88rsv d68182a6-618b-4bb7-a42f-45e0e0f59090 37177 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003370df0 0xc003370df1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.806: INFO: Pod "webserver-deployment-595b5b9587-8jgpl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8jgpl webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-8jgpl 06ba18ae-51a2-4d22-834c-217f3280abf4 37214 0 2020-01-04 
14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003370f00 0xc003370f01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:56:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.806: INFO: Pod "webserver-deployment-595b5b9587-9szrv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9szrv webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-9szrv 781f9b42-5498-4da9-bc0a-750271a9922d 37175 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371057 0xc003371058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuber
netes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.806: INFO: Pod "webserver-deployment-595b5b9587-b6svj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b6svj webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-b6svj a901bc99-43aa-47ef-810a-ca1a104fbdbd 37076 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371170 0xc003371171}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-04 14:56:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d9be157a2d4a1157d6e64023acf652dca2f5b9da3ebe0118007809a567da76f8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.807: INFO: Pod "webserver-deployment-595b5b9587-b72bj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b72bj webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-b72bj cae94230-4e69-46a9-b854-3fccd34942f1 37070 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0033712e0 0xc0033712e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-04 14:56:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a63e9461985137ae263fd8d0474a2ea3b263f1326332aac5676aa0239403b19f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.807: INFO: Pod "webserver-deployment-595b5b9587-c9nrd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c9nrd webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-c9nrd a27ebf33-0437-4748-bb52-afdbe4f14493 37047 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371450 0xc003371451}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-04 14:56:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4b269f753872daf2cbc013093ffd26c8a898728dd02df73f90684eeeaea959d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.807: INFO: Pod "webserver-deployment-595b5b9587-dz7nw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dz7nw webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-dz7nw e55f87de-9d6d-4afd-ba8f-54e8e3da9ae1 37215 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0033715b0 0xc0033715b1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-04 
14:56:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.807: INFO: Pod "webserver-deployment-595b5b9587-gv56g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gv56g webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-gv56g 10df0a28-093d-4854-85aa-f7cfe89ac1ee 37041 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0033716f7 0xc0033716f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{}
,RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-04 14:56:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b014c0d234291f0e83ac8a6d3713b5360281d824f44b74e1bc5eff66211e1e56,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.808: INFO: Pod "webserver-deployment-595b5b9587-gxlxs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gxlxs webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-gxlxs d99dc36f-eab1-4ca6-aadc-d589c438174e 37172 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371860 0xc003371861}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.808: INFO: Pod "webserver-deployment-595b5b9587-hqggt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hqggt webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-hqggt f2d56634-d23c-43c1-9a17-4fb3c88584e5 37165 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371970 0xc003371971}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.808: INFO: Pod "webserver-deployment-595b5b9587-jc5g2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jc5g2 webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-jc5g2 95a20141-9ee2-450f-9d50-38ad83ea4bba 37054 0 2020-01-04 
14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371a80 0xc003371a81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-04 14:56:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7c29f17668746613adc43f3d5053c94425a76b76abd91f0e0f5dac46ad92480e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.808: INFO: Pod "webserver-deployment-595b5b9587-mbwmv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mbwmv webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-mbwmv 070ac8dc-2e87-4888-b162-1796131c05ff 37222 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371be0 0xc003371be1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoE
xecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-04 14:56:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.809: INFO: Pod "webserver-deployment-595b5b9587-pjnvc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pjnvc webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-pjnvc 9ddcab9f-7015-460e-8a33-e32900f49a9d 37223 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371d27 0xc003371d28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:56:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.809: INFO: Pod "webserver-deployment-595b5b9587-qmc7d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qmc7d webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-qmc7d 3f6736c8-2451-4a74-a945-6d2def8580d2 37225 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371e87 0xc003371e88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enab
leServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:56:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.809: INFO: Pod "webserver-deployment-595b5b9587-qwqhg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qwqhg webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-qwqhg 91552373-1d4f-4d92-acb9-7a9e84a1a25f 37044 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc003371fe7 0xc003371fe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-04 14:56:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7ae98c4bc1b4803a4a120a618719ad1b965ec5b38a8b2f4ac53206104d435531,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.809: INFO: Pod "webserver-deployment-595b5b9587-s59bz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s59bz webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-s59bz fe55248f-2a2f-4d01-b7a5-bf51a8633a96 37226 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0047b6150 0xc0047b6151}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExe
cute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-04 14:56:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.810: INFO: Pod "webserver-deployment-595b5b9587-tzs22" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tzs22 webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-tzs22 ed146f22-7396-4833-ba0a-7cab81b087c9 37064 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0047b62d7 0xc0047b62d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-04 14:56:21 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3a531a29c4eb1122c4fe10c76a40f5cc2a6ceb8bcdc5736d2f117b69b1741e35,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.810: INFO: Pod "webserver-deployment-595b5b9587-wb5xg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wb5xg webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-wb5xg ac5b7734-de06-4768-8802-15d5cabef235 37179 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0047b6460 0xc0047b6461}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Toleration
Seconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.810: INFO: Pod "webserver-deployment-595b5b9587-xbr87" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbr87 webserver-deployment-595b5b9587- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-595b5b9587-xbr87 8fc64dad-9294-4103-8de4-a927dcff5417 37038 0 2020-01-04 14:56:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 edd01113-d157-4ba5-8274-46e9d8561703 0xc0047b6570 0xc0047b6571}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-04 14:56:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 14:56:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://af320bcd328985ecee44d7753cb1b82883e32b65454f2e6b2d81a13a2497576d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.811: INFO: Pod "webserver-deployment-c7997dcc8-57gp6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-57gp6 webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-57gp6 d7894546-4f5e-40ca-bbc3-1bfabca8813e 37208 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b66d0 0xc0047b66d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-04 14:56:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.811: INFO: Pod "webserver-deployment-c7997dcc8-5flqg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5flqg webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-5flqg c7f9a805-6937-4528-8177-d802871a4ea6 37181 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6830 0xc0047b6831}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:n
il,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.811: INFO: Pod "webserver-deployment-c7997dcc8-f2jcv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f2jcv webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-f2jcv c22ba1ad-f077-4a88-9f53-d554a912fe2e 37199 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6940 0xc0047b6941}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtim
eClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.811: INFO: Pod "webserver-deployment-c7997dcc8-f9p2f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f9p2f webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-f9p2f 4d488660-de9d-4229-a18d-e02072e26b9e 37187 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6a60 0xc0047b6a61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProce
ssNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.812: INFO: Pod "webserver-deployment-c7997dcc8-ggc6s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ggc6s webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-ggc6s ac3c2463-10f6-4681-a102-539a8461996a 37127 0 2020-01-04 14:56:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6b80 0xc0047b6b81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias
{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.812: INFO: Pod "webserver-deployment-c7997dcc8-hrqhs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hrqhs webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-hrqhs 3bb89dfe-68aa-4d81-b3a1-d55cd3ac378a 37125 0 2020-01-04 14:56:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6cf0 0xc0047b6cf1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-04 14:56:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.812: INFO: Pod "webserver-deployment-c7997dcc8-j49dv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j49dv webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-j49dv 1c7d9a7a-e8e2-4a17-a9b1-3ae1c4161fd2 37188 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6e50 0xc0047b6e51}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:n
il,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.813: INFO: Pod "webserver-deployment-c7997dcc8-jk2xv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jk2xv webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-jk2xv bd38b3b9-242e-4b8d-bab2-f27256ed6f45 37121 0 2020-01-04 14:56:44 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b6f60 0xc0047b6f61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtim
eClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.813: INFO: Pod "webserver-deployment-c7997dcc8-l5wbh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l5wbh webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-l5wbh 4895a131-5f29-467e-86f6-19c99a37bf2c 37185 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b70d0 0xc0047b70d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.813: INFO: Pod "webserver-deployment-c7997dcc8-swjtv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-swjtv webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-swjtv c0b110cb-45dc-4348-9e0c-20522bf1a18b 37100 0 2020-01-04 14:56:44 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b71f0 0xc0047b71f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-04 14:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.813: INFO: Pod "webserver-deployment-c7997dcc8-vx667" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vx667 webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-vx667 ea8d4375-2d0b-4b2a-83b6-477117c6611c 37174 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b7350 0xc0047b7351}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGa
tes:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.813: INFO: Pod "webserver-deployment-c7997dcc8-wbpjc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wbpjc webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-wbpjc c64e3974-094d-41ca-8c5d-218f78f9f2b5 37189 0 2020-01-04 14:56:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b7470 0xc0047b7471}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityC
lassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 4 14:56:57.814: INFO: Pod "webserver-deployment-c7997dcc8-wj5th" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wj5th webserver-deployment-c7997dcc8- deployment-5580 /api/v1/namespaces/deployment-5580/pods/webserver-deployment-c7997dcc8-wj5th 0948158c-ae8d-4228-ab32-6eb23e54fbf1 37104 0 2020-01-04 14:56:44 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 b06dfc15-c3c5-46ed-9f5b-5dc9f6f9ccbd 0xc0047b7580 0xc0047b7581}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz5vj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz5vj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz5vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tole
rationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 14:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-04 14:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:56:57.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5580" for this suite. 
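------------------------------
The "is not available" dumps above are expected for this test: the proportional-scaling case switches the Deployment's pod template to the image webserver:404, an intentionally unresolvable tag, so the replacement ReplicaSet's pods stay Pending (ContainerCreating, Ready=False with reason ContainersNotReady) and are counted as unavailable while the old and new ReplicaSets are scaled proportionally. The availability predicate behind that log message reduces to the pod's Ready condition plus minReadySeconds. The Go sketch below is a minimal, self-contained approximation of that check — it mirrors the shape of IsPodAvailable in k8s.io/kubernetes/pkg/api/v1/pod but is not the upstream code, and it assumes k8s.io/api and k8s.io/apimachinery are present in go.mod:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// getPodReadyCondition returns the pod's Ready condition, if present.
func getPodReadyCondition(status corev1.PodStatus) *corev1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == corev1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable approximates the availability check used when counting
// available replicas: the pod's Ready condition must be True and must have
// been True for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := getPodReadyCondition(pod.Status)
	if c == nil || c.Status != corev1.ConditionTrue {
		// Matches the pods dumped above: Ready=False, ContainersNotReady.
		return false
	}
	if minReadySeconds == 0 {
		return true
	}
	readyFor := now.Time.Sub(c.LastTransitionTime.Time)
	return readyFor >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	// A pod stuck like the webserver:404 pods above: Pending, with the
	// Ready condition False because the container never starts.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{{
				Type:   corev1.PodReady,
				Status: corev1.ConditionFalse,
				Reason: "ContainersNotReady",
			}},
		},
	}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
}

Applied to any of the pods dumped above, the predicate returns false for exactly the reason logged: the Ready condition is False with reason ContainersNotReady, so the pod never counts toward AvailableReplicas.
------------------------------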
• [SLOW TEST:38.607 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":155,"skipped":2541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:56:59.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:57:01.356: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:57:03.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:06.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:09.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:10.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:12.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:14.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:15.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:18.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:20.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:22.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:24.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:25.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:27.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:29.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:31.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:34.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:57:35.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746621, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} (the same "deployment status" was logged verbatim at 14:57:37.973, 14:57:41.139, 14:57:42.393, 14:57:44.555, 14:57:46.429, 14:57:49.479, 14:57:51.606, 14:57:55.350, 14:57:55.956, 14:57:57.645, 14:57:59.399, 14:58:01.384, 14:58:03.378, 14:58:05.376, 14:58:07.379, 14:58:09.429 and 14:58:11.380 while the suite waited for the webhook deployment to become available) STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:58:14.439: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:58:14.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9287" for this suite. STEP: Destroying namespace "webhook-9287-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:75.373 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":156,"skipped":2565,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:58:14.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-f492 STEP: Creating a pod to test atomic-volume-subpath Jan 4 14:58:14.937: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-f492" in namespace "subpath-9759" to be "success or failure" Jan 4 14:58:14.956: INFO: Pod "pod-subpath-test-projected-f492": Phase="Pending", Reason="", readiness=false. Elapsed: 19.176544ms Jan 4 14:58:16.961: INFO: Pod "pod-subpath-test-projected-f492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024614356s Jan 4 14:58:18.967: INFO: Pod "pod-subpath-test-projected-f492": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030392731s Jan 4 14:58:21.063: INFO: Pod "pod-subpath-test-projected-f492": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12679085s Jan 4 14:58:23.084: INFO: Pod "pod-subpath-test-projected-f492": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147194543s Jan 4 14:58:25.096: INFO: Pod "pod-subpath-test-projected-f492": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15972579s Jan 4 14:58:27.104: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 12.167389164s Jan 4 14:58:29.109: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 14.172665006s Jan 4 14:58:31.116: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.179213015s Jan 4 14:58:33.123: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 18.185920166s Jan 4 14:58:35.132: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 20.195350178s Jan 4 14:58:37.138: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 22.2009233s Jan 4 14:58:39.142: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 24.205268173s Jan 4 14:58:41.149: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 26.212366909s Jan 4 14:58:43.156: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 28.219755797s Jan 4 14:58:45.172: INFO: Pod "pod-subpath-test-projected-f492": Phase="Running", Reason="", readiness=true. Elapsed: 30.235807116s Jan 4 14:58:47.180: INFO: Pod "pod-subpath-test-projected-f492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.243073948s STEP: Saw pod success Jan 4 14:58:47.180: INFO: Pod "pod-subpath-test-projected-f492" satisfied condition "success or failure" Jan 4 14:58:47.184: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-f492 container test-container-subpath-projected-f492: STEP: delete the pod Jan 4 14:58:47.385: INFO: Waiting for pod pod-subpath-test-projected-f492 to disappear Jan 4 14:58:47.391: INFO: Pod pod-subpath-test-projected-f492 no longer exists STEP: Deleting pod pod-subpath-test-projected-f492 Jan 4 14:58:47.391: INFO: Deleting pod "pod-subpath-test-projected-f492" in namespace "subpath-9759" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:58:47.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9759" for this suite. 
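For readers following along, the atomic-writer subpath specs reduce to a pod whose container mounts a single key of a volume via subPath; projected, configMap, and secret variants differ only in the volume source. A minimal sketch in Go, assuming the k8s.io/api packages; the pod name, image, command, and ConfigMap reference are illustrative, not the suite's actual fixture:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					// Projected volumes use the same atomic-writer machinery as
					// configMap and secret volumes, which is what these specs probe.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"}, // hypothetical ConfigMap
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /data/key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/data/key",
					SubPath:   "key", // mount one key of the volume, not the whole directory
				}},
			}},
		},
	}
	fmt.Printf("%s mounts subPath %q\n", pod.Name, pod.Spec.Containers[0].VolumeMounts[0].SubPath)
}
```
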
• [SLOW TEST:32.585 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":157,"skipped":2568,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:58:47.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 4 14:58:47.551: INFO: Waiting up to 5m0s for pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d" in namespace "downward-api-9636" to be "success or failure" Jan 4 14:58:47.585: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.442051ms Jan 4 14:58:49.592: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04134953s Jan 4 14:58:51.600: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048850029s Jan 4 14:58:53.609: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058008679s Jan 4 14:58:55.615: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06348248s Jan 4 14:58:57.619: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06837421s STEP: Saw pod success Jan 4 14:58:57.620: INFO: Pod "downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d" satisfied condition "success or failure" Jan 4 14:58:57.622: INFO: Trying to get logs from node jerma-node pod downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d container dapi-container: STEP: delete the pod Jan 4 14:58:57.687: INFO: Waiting for pod downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d to disappear Jan 4 14:58:57.697: INFO: Pod downward-api-896d83c4-2297-441d-bf4a-8c482e83d85d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:58:57.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9636" for this suite. 
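The Downward API spec above verifies exactly three fieldRef paths: pod name, namespace, and IP. A minimal sketch of those env vars, assuming the k8s.io/api packages; the env var names are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardEnv builds the env vars the spec above verifies: the pod's name,
// namespace, and IP exposed to the container through fieldRef.
func downwardEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}

func main() {
	for _, e := range downwardEnv() {
		fmt.Println(e.Name, "<-", e.ValueFrom.FieldRef.FieldPath)
	}
}
```
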
• [SLOW TEST:10.301 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2575,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:58:57.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 4 14:58:57.859: INFO: Waiting up to 5m0s for pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9" in namespace "emptydir-3567" to be "success or failure" Jan 4 14:58:57.875: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.16483ms Jan 4 14:58:59.895: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03593493s Jan 4 14:59:01.903: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044525438s Jan 4 14:59:03.911: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052090992s Jan 4 14:59:05.918: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059413845s Jan 4 14:59:07.932: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073450752s STEP: Saw pod success Jan 4 14:59:07.932: INFO: Pod "pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9" satisfied condition "success or failure" Jan 4 14:59:07.937: INFO: Trying to get logs from node jerma-node pod pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9 container test-container: STEP: delete the pod Jan 4 14:59:07.970: INFO: Waiting for pod pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9 to disappear Jan 4 14:59:07.981: INFO: Pod pod-a6009ec8-0afe-4d85-be21-1ae64778e7b9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:59:07.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3567" for this suite. 
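The emptyDir matrix entry above, (non-root,0777,tmpfs), combines three spec fields: a memory-backed emptyDir, a non-root runAsUser, and a file created with mode 0777. A sketch under those assumptions, with an illustrative busybox command standing in for the suite's mounttest image:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1000) // any non-root UID; illustrative
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &nonRoot,
		},
		Volumes: []corev1.Volume{{
			Name: "scratch",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" makes the emptyDir a tmpfs mount.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "writer",
			Image: "busybox:1.29",
			// Create a file with mode 0777, then show its permissions and the fs type.
			Command: []string{"sh", "-c",
				"touch /scratch/f && chmod 0777 /scratch/f && ls -l /scratch/f && mount | grep /scratch"},
			VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
		}},
	}
	fmt.Println("tmpfs emptyDir pod runs as UID", *spec.SecurityContext.RunAsUser)
}
```
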
• [SLOW TEST:10.323 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2588,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:59:08.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 4 14:59:08.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2528 /api/v1/namespaces/watch-2528/configmaps/e2e-watch-test-resource-version 043cf6e2-2c7b-4eea-8d9d-dd0287c5aecd 37863 0 2020-01-04 14:59:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 14:59:08.169: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2528 /api/v1/namespaces/watch-2528/configmaps/e2e-watch-test-resource-version 043cf6e2-2c7b-4eea-8d9d-dd0287c5aecd 37864 0 2020-01-04 14:59:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:59:08.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2528" for this suite. 
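The watch spec above leans on list/watch semantics: a watch opened at an older resourceVersion replays every change made after that version, which is why the MODIFIED event for mutation 2 and the DELETED event both arrive even though they happened before the watch started. A sketch with a recent client-go, where Watch takes a context (the v1.17-era client used here takes no context argument); the kubeconfig path, namespace, and resourceVersion are illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Resume watching from a previously observed resourceVersion: the API
	// server replays every event that occurred after that version.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: "37862", // illustrative: the version returned by the first update
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // e.g. MODIFIED, then DELETED, as in the log above
	}
}
```
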
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":160,"skipped":2591,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:59:08.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 4 14:59:18.490: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:59:18.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8813" for this suite. • [SLOW TEST:10.488 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2597,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:59:18.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod 
[LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-lffn STEP: Creating a pod to test atomic-volume-subpath Jan 4 14:59:19.010: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lffn" in namespace "subpath-236" to be "success or failure" Jan 4 14:59:19.071: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Pending", Reason="", readiness=false. Elapsed: 61.323124ms Jan 4 14:59:21.078: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067712974s Jan 4 14:59:23.083: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072596383s Jan 4 14:59:25.090: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080206755s Jan 4 14:59:27.108: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 8.097897422s Jan 4 14:59:29.113: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 10.102719429s Jan 4 14:59:31.126: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 12.116048416s Jan 4 14:59:33.134: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 14.123392677s Jan 4 14:59:35.146: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 16.136306225s Jan 4 14:59:37.153: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 18.142532251s Jan 4 14:59:39.162: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 20.151923798s Jan 4 14:59:41.169: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 22.158766814s Jan 4 14:59:43.183: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 24.17308291s Jan 4 14:59:45.188: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 26.177921453s Jan 4 14:59:47.194: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Running", Reason="", readiness=true. Elapsed: 28.183877577s Jan 4 14:59:49.201: INFO: Pod "pod-subpath-test-configmap-lffn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.191206719s STEP: Saw pod success Jan 4 14:59:49.201: INFO: Pod "pod-subpath-test-configmap-lffn" satisfied condition "success or failure" Jan 4 14:59:49.206: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-lffn container test-container-subpath-configmap-lffn: STEP: delete the pod Jan 4 14:59:49.267: INFO: Waiting for pod pod-subpath-test-configmap-lffn to disappear Jan 4 14:59:49.296: INFO: Pod pod-subpath-test-configmap-lffn no longer exists STEP: Deleting pod pod-subpath-test-configmap-lffn Jan 4 14:59:49.297: INFO: Deleting pod "pod-subpath-test-configmap-lffn" in namespace "subpath-236" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:59:49.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-236" for this suite. 
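Stepping back to the Container Runtime blackbox spec a few entries above: its key knob is TerminationMessagePolicy. With FallbackToLogsOnError, the kubelet copies container logs into the termination message only when the container fails; on success the message stays empty, which is exactly what that spec asserted ("Expected: &{} to match Container's Termination Message: --"). A sketch; the image and command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "demo",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "echo some log output; exit 0"},
		// On success the termination message stays empty; container logs
		// become the message only if the container exits with an error.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		TerminationMessagePath:   "/dev/termination-log",
	}
	fmt.Println("policy:", c.TerminationMessagePolicy)
}
```
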
• [SLOW TEST:30.645 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":162,"skipped":2607,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:59:49.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 14:59:49.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 14:59:51.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746789, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746789, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746789, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 14:59:53.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746789, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746789, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713746789, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 14:59:56.942: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 14:59:57.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7783" for this suite. STEP: Destroying namespace "webhook-7783-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.102 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":163,"skipped":2609,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 14:59:57.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 14:59:57.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c" in namespace "projected-5756" to be "success or failure" Jan 4 14:59:57.747: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 105.562235ms Jan 4 14:59:59.752: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110781155s Jan 4 15:00:01.757: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115838169s Jan 4 15:00:03.763: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.121574306s Jan 4 15:00:05.769: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127899041s Jan 4 15:00:07.776: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.134729122s Jan 4 15:00:09.786: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.145463152s STEP: Saw pod success Jan 4 15:00:09.787: INFO: Pod "downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c" satisfied condition "success or failure" Jan 4 15:00:09.797: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c container client-container: STEP: delete the pod Jan 4 15:00:09.935: INFO: Waiting for pod downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c to disappear Jan 4 15:00:09.946: INFO: Pod downwardapi-volume-309b393c-be80-4ed4-ab5f-54798745c63c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:00:09.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5756" for this suite. • [SLOW TEST:12.541 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2628,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:00:09.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:00:10.073: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047" in namespace "security-context-test-9138" to be "success or failure" Jan 4 15:00:10.082: INFO: Pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047": Phase="Pending", Reason="", readiness=false. Elapsed: 9.233147ms Jan 4 15:00:12.089: INFO: Pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015841074s Jan 4 15:00:14.094: INFO: Pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.020345104s Jan 4 15:00:16.099: INFO: Pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025547558s Jan 4 15:00:18.104: INFO: Pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031169988s Jan 4 15:00:18.104: INFO: Pod "busybox-user-65534-e87e4539-9ae3-44c2-b46f-1e9ebbd25047" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:00:18.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9138" for this suite. • [SLOW TEST:8.163 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2630,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:00:18.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:00:18.406: INFO: Create a RollingUpdate DaemonSet Jan 4 15:00:18.412: INFO: Check that daemon pods launch on every node of the cluster Jan 4 15:00:18.422: INFO: Number of nodes with available pods: 0 Jan 4 15:00:18.422: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:20.006: INFO: Number of nodes with available pods: 0 Jan 4 15:00:20.006: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:20.650: INFO: Number of nodes with available pods: 0 Jan 4 15:00:20.650: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:21.442: INFO: Number of nodes with available pods: 0 Jan 4 15:00:21.443: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:22.446: INFO: Number of nodes with available pods: 0 Jan 4 15:00:22.447: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:24.598: INFO: Number of nodes with available pods: 0 Jan 4 15:00:24.598: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:25.532: INFO: Number of nodes with available pods: 0 Jan 4 15:00:25.533: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:26.698: 
INFO: Number of nodes with available pods: 0 Jan 4 15:00:26.698: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:27.498: INFO: Number of nodes with available pods: 0 Jan 4 15:00:27.498: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:00:28.436: INFO: Number of nodes with available pods: 1 Jan 4 15:00:28.436: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 4 15:00:29.435: INFO: Number of nodes with available pods: 2 Jan 4 15:00:29.435: INFO: Number of running nodes: 2, number of available pods: 2 Jan 4 15:00:29.435: INFO: Update the DaemonSet to trigger a rollout Jan 4 15:00:29.445: INFO: Updating DaemonSet daemon-set Jan 4 15:00:36.478: INFO: Roll back the DaemonSet before rollout is complete Jan 4 15:00:36.490: INFO: Updating DaemonSet daemon-set Jan 4 15:00:36.490: INFO: Make sure DaemonSet rollback is complete Jan 4 15:00:36.954: INFO: Wrong image for pod: daemon-set-dw6zc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 4 15:00:36.954: INFO: Pod daemon-set-dw6zc is not available Jan 4 15:00:38.270: INFO: Wrong image for pod: daemon-set-dw6zc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 4 15:00:38.270: INFO: Pod daemon-set-dw6zc is not available Jan 4 15:00:38.968: INFO: Wrong image for pod: daemon-set-dw6zc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 4 15:00:38.968: INFO: Pod daemon-set-dw6zc is not available Jan 4 15:00:39.995: INFO: Wrong image for pod: daemon-set-dw6zc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 4 15:00:39.995: INFO: Pod daemon-set-dw6zc is not available Jan 4 15:00:40.968: INFO: Wrong image for pod: daemon-set-dw6zc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 4 15:00:40.968: INFO: Pod daemon-set-dw6zc is not available Jan 4 15:00:42.311: INFO: Pod daemon-set-w2dqg is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8083, will wait for the garbage collector to delete the pods Jan 4 15:00:42.389: INFO: Deleting DaemonSet.extensions daemon-set took: 5.962054ms Jan 4 15:00:43.489: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.100447799s Jan 4 15:00:53.194: INFO: Number of nodes with available pods: 0 Jan 4 15:00:53.194: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 15:00:53.203: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8083/daemonsets","resourceVersion":"38350"},"items":null} Jan 4 15:00:53.208: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8083/pods","resourceVersion":"38350"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:00:53.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8083" for this suite. 
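The rollback spec above does programmatically what `kubectl rollout undo daemonset/daemon-set` does: push an image that can never pull, then restore the previous template before the RollingUpdate completes, and check that pods already running the good image are not restarted. A sketch of the update-then-revert step with a recent client-go (context-style API); the namespace and kubeconfig path are illustrative, the images are the ones from the log:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dsClient := cs.AppsV1().DaemonSets("default")

	// Fetch the DaemonSet, swap the container image, and push the update.
	update := func(image string) {
		ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}

	update("foo:non-existent")                      // trigger a rollout that can never finish
	update("docker.io/library/httpd:2.4.38-alpine") // roll back before it completes
}
```
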
• [SLOW TEST:35.116 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":166,"skipped":2643,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:00:53.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jan 4 15:00:53.316: INFO: Created pod &Pod{ObjectMeta:{dns-611 dns-611 /api/v1/namespaces/dns-611/pods/dns-611 70d93442-a22d-469c-95ce-2823bf6b75c3 38356 0 2020-01-04 15:00:53 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7pncg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7pncg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7pncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration
{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Jan 4 15:01:01.361: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-611 PodName:dns-611 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 15:01:01.361: INFO: >>> kubeConfig: /root/.kube/config I0104 15:01:01.449551 9 log.go:172] (0xc002597b80) (0xc00112abe0) Create stream I0104 15:01:01.449617 9 log.go:172] (0xc002597b80) (0xc00112abe0) Stream added, broadcasting: 1 I0104 15:01:01.454706 9 log.go:172] (0xc002597b80) Reply frame received for 1 I0104 15:01:01.454748 9 log.go:172] (0xc002597b80) (0xc0015cb720) Create stream I0104 15:01:01.454757 9 log.go:172] (0xc002597b80) (0xc0015cb720) Stream added, broadcasting: 3 I0104 15:01:01.456482 9 log.go:172] (0xc002597b80) Reply frame received for 3 I0104 15:01:01.456515 9 log.go:172] (0xc002597b80) (0xc00112ad20) Create stream I0104 15:01:01.456527 9 log.go:172] (0xc002597b80) (0xc00112ad20) Stream added, broadcasting: 5 I0104 15:01:01.458614 9 log.go:172] (0xc002597b80) Reply frame received for 5 I0104 15:01:01.576927 9 log.go:172] (0xc002597b80) Data frame received for 3 I0104 15:01:01.576970 9 log.go:172] (0xc0015cb720) (3) Data frame handling I0104 15:01:01.576993 9 log.go:172] (0xc0015cb720) (3) Data frame sent I0104 15:01:01.665765 9 log.go:172] (0xc002597b80) Data frame received for 1 I0104 15:01:01.665890 9 log.go:172] (0xc00112abe0) (1) Data frame handling I0104 15:01:01.665912 9 log.go:172] (0xc00112abe0) (1) Data frame sent I0104 15:01:01.665936 9 log.go:172] (0xc002597b80) (0xc00112abe0) Stream removed, broadcasting: 1 I0104 15:01:01.668398 9 log.go:172] (0xc002597b80) (0xc0015cb720) Stream removed, broadcasting: 3 I0104 15:01:01.668652 9 log.go:172] (0xc002597b80) (0xc00112ad20) Stream removed, broadcasting: 5 I0104 15:01:01.668690 9 log.go:172] (0xc002597b80) (0xc00112abe0) Stream removed, broadcasting: 1 I0104 15:01:01.668704 9 log.go:172] (0xc002597b80) (0xc0015cb720) Stream removed, broadcasting: 3 I0104 15:01:01.668712 9 log.go:172] (0xc002597b80) (0xc00112ad20) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jan 4 15:01:01.669: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-611 PodName:dns-611 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 15:01:01.669: INFO: >>> kubeConfig: /root/.kube/config I0104 15:01:01.671711 9 log.go:172] (0xc002597b80) Go away received I0104 15:01:01.712853 9 log.go:172] (0xc005c9a580) (0xc0015cba40) Create stream I0104 15:01:01.712933 9 log.go:172] (0xc005c9a580) (0xc0015cba40) Stream added, broadcasting: 1 I0104 15:01:01.717213 9 log.go:172] (0xc005c9a580) Reply frame received for 1 I0104 15:01:01.717238 9 log.go:172] (0xc005c9a580) (0xc0015cbb80) Create stream I0104 15:01:01.717245 9 log.go:172] (0xc005c9a580) (0xc0015cbb80) Stream added, broadcasting: 3 I0104 15:01:01.718512 9 log.go:172] (0xc005c9a580) Reply frame received for 3 I0104 15:01:01.718534 9 log.go:172] (0xc005c9a580) (0xc00112afa0) Create stream I0104 15:01:01.718542 9 log.go:172] (0xc005c9a580) (0xc00112afa0) Stream added, broadcasting: 5 I0104 15:01:01.719944 9 log.go:172] (0xc005c9a580) Reply frame received for 5 I0104 15:01:01.819589 9 log.go:172] (0xc005c9a580) Data frame received for 3 I0104 15:01:01.819873 9 log.go:172] (0xc0015cbb80) (3) Data frame handling I0104 15:01:01.819936 9 log.go:172] (0xc0015cbb80) (3) Data frame sent I0104 15:01:01.926219 9 log.go:172] (0xc005c9a580) Data frame received for 1 I0104 15:01:01.926345 9 log.go:172] (0xc005c9a580) (0xc0015cbb80) Stream removed, broadcasting: 3 I0104 15:01:01.926405 9 log.go:172] (0xc0015cba40) (1) Data frame handling I0104 15:01:01.926413 9 log.go:172] (0xc0015cba40) (1) Data frame sent I0104 15:01:01.926418 9 log.go:172] (0xc005c9a580) (0xc0015cba40) Stream removed, broadcasting: 1 I0104 15:01:01.926540 9 log.go:172] (0xc005c9a580) (0xc00112afa0) Stream removed, broadcasting: 5 I0104 15:01:01.926630 9 log.go:172] (0xc005c9a580) Go away received I0104 15:01:01.926668 9 log.go:172] (0xc005c9a580) (0xc0015cba40) Stream removed, broadcasting: 1 I0104 15:01:01.926682 9 log.go:172] (0xc005c9a580) (0xc0015cbb80) Stream removed, broadcasting: 3 I0104 15:01:01.926690 9 log.go:172] (0xc005c9a580) (0xc00112afa0) Stream removed, broadcasting: 5 Jan 4 15:01:01.926: INFO: Deleting pod dns-611... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:01:01.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-611" for this suite. • [SLOW TEST:8.747 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":167,"skipped":2645,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:01:01.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:01:09.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7487" for this suite. • [SLOW TEST:7.216 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":168,"skipped":2661,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:01:09.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-lfdx STEP: Creating a pod to test atomic-volume-subpath Jan 4 15:01:09.425: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lfdx" in namespace "subpath-5816" to be "success or failure" Jan 4 15:01:09.493: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Pending", Reason="", readiness=false. Elapsed: 68.308032ms Jan 4 15:01:11.500: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074895543s Jan 4 15:01:13.505: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080473447s Jan 4 15:01:15.513: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088016759s Jan 4 15:01:17.695: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.269895953s Jan 4 15:01:19.702: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 10.27735635s Jan 4 15:01:21.710: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 12.285020691s Jan 4 15:01:23.717: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 14.291726505s Jan 4 15:01:25.725: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 16.299512414s Jan 4 15:01:27.769: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 18.343594288s Jan 4 15:01:29.778: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 20.352757026s Jan 4 15:01:31.791: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 22.366377079s Jan 4 15:01:33.800: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 24.375082762s Jan 4 15:01:35.807: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 26.381795437s Jan 4 15:01:37.813: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Running", Reason="", readiness=true. Elapsed: 28.388312606s Jan 4 15:01:39.818: INFO: Pod "pod-subpath-test-secret-lfdx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.393092458s STEP: Saw pod success Jan 4 15:01:39.818: INFO: Pod "pod-subpath-test-secret-lfdx" satisfied condition "success or failure" Jan 4 15:01:39.821: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-lfdx container test-container-subpath-secret-lfdx: STEP: delete the pod Jan 4 15:01:39.885: INFO: Waiting for pod pod-subpath-test-secret-lfdx to disappear Jan 4 15:01:39.904: INFO: Pod pod-subpath-test-secret-lfdx no longer exists STEP: Deleting pod pod-subpath-test-secret-lfdx Jan 4 15:01:39.904: INFO: Deleting pod "pod-subpath-test-secret-lfdx" in namespace "subpath-5816" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:01:39.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5816" for this suite. 
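For reference, the pod this subpath test builds is roughly the following (a minimal sketch against k8s.io/api, not the framework's own code; the secret name, image, and paths are illustrative). The feature under test is a volumeMount whose subPath selects a single key of the secret volume, exercising the kubelet's atomic-writer machinery:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose volumeMount uses subPath to expose a single key of a
	// secret volume, the mechanism this conformance test exercises.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // illustrative
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-secret",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /test-volume/secret-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/secret-key",
					SubPath:   "secret-key", // mount one key of the volume, not the whole directory
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The same pattern recurs in the downward API and configMap subpath tests later in this run.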
• [SLOW TEST:30.713 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":169,"skipped":2674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:01:39.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-87a12a56-0cb9-419a-ba40-f07c6e3235a7 STEP: Creating a pod to test consume configMaps Jan 4 15:01:40.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e" in namespace "projected-4471" to be "success or failure" Jan 4 15:01:40.104: INFO: Pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.169371ms Jan 4 15:01:42.110: INFO: Pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034353243s Jan 4 15:01:44.118: INFO: Pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041896364s Jan 4 15:01:46.131: INFO: Pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055341442s Jan 4 15:01:48.137: INFO: Pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060947872s STEP: Saw pod success Jan 4 15:01:48.137: INFO: Pod "pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e" satisfied condition "success or failure" Jan 4 15:01:48.141: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e container projected-configmap-volume-test: STEP: delete the pod Jan 4 15:01:48.168: INFO: Waiting for pod pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e to disappear Jan 4 15:01:48.231: INFO: Pod pod-projected-configmaps-cfe26c3e-0688-4080-b815-4a8403d6cc5e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:01:48.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4471" for this suite. 
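A minimal sketch of the shape this "multiple volumes in the same pod" test exercises: one ConfigMap projected through two separate volumes, each with its own mount. All names and the image are illustrative, not the test's actual values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projected returns a volume that projects the named ConfigMap.
func projected(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	// One ConfigMap consumed through two projected volumes in the same pod.
	cm := "projected-configmap-test-volume" // illustrative name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{projected("vol-1", cm), projected("vol-2", cm)},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "docker.io/library/busybox:1.29", // illustrative
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/projected-1", ReadOnly: true},
					{Name: "vol-2", MountPath: "/etc/projected-2", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}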
• [SLOW TEST:8.328 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2731,"failed":0} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:01:48.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 4 15:02:02.992: INFO: Successfully updated pod "annotationupdatec16d450c-77b3-4dd1-ab6f-336bad4ff6e1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:02:05.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1689" for this suite. 
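The mechanism behind "update annotations on modification" is a projected downward API volume: the pod's annotations are exposed as a file, and the kubelet rewrites that file when the annotations change, which is what the "Successfully updated pod" step verifies. A minimal sketch of such a volume (the volume name and file path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected downward API volume exposing the pod's annotations as a
	// file; the kubelet keeps the file in sync with the live object.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}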
• [SLOW TEST:16.842 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2731,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:02:05.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:02:05.376: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:02:21.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5837" for this suite. 
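The websocket exec test drives the pods/exec subresource over a WebSocket rather than SPDY. A sketch of how such a request URL can be built with client-go (the kubeconfig path matches this run; the pod name and command are illustrative, and this is not the framework's own code):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// GET (not POST) because a WebSocket handshake is an upgraded GET; a
	// websocket client then streams stdout/stderr frames over the
	// channel.k8s.io subprotocol.
	req := client.CoreV1().RESTClient().Get().
		Namespace("pods-5837"). // namespace from this run
		Resource("pods").
		Name("pod-exec-websocket"). // illustrative pod name
		SubResource("exec").
		Param("command", "echo").
		Param("command", "remote execution over websockets").
		Param("stdout", "true").
		Param("stderr", "true")
	fmt.Println(req.URL().String())
}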
• [SLOW TEST:16.571 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2732,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:02:21.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0c0612a9-9676-45c7-b4ed-0fce4e0d0cab STEP: Creating a pod to test consume configMaps Jan 4 15:02:21.783: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac" in namespace "projected-852" to be "success or failure" Jan 4 15:02:21.802: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Pending", Reason="", readiness=false. Elapsed: 18.45428ms Jan 4 15:02:23.811: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027459732s Jan 4 15:02:25.818: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034708837s Jan 4 15:02:27.855: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071669521s Jan 4 15:02:29.861: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078372504s Jan 4 15:02:31.880: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097227019s Jan 4 15:02:33.892: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Running", Reason="", readiness=true. Elapsed: 12.108524865s Jan 4 15:02:35.898: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.115075865s STEP: Saw pod success Jan 4 15:02:35.898: INFO: Pod "pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac" satisfied condition "success or failure" Jan 4 15:02:35.901: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac container projected-configmap-volume-test: STEP: delete the pod Jan 4 15:02:35.941: INFO: Waiting for pod pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac to disappear Jan 4 15:02:36.051: INFO: Pod pod-projected-configmaps-ec13c901-d54c-48de-bcf5-62798b4f2eac no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:02:36.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-852" for this suite. • [SLOW TEST:14.413 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2733,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:02:36.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:02:36.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 4 15:02:36.360: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-04T15:02:36Z generation:1 name:name1 resourceVersion:38783 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a836e6d2-5d55-4e82-b6cf-04479f5d6ba0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 4 15:02:46.369: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-04T15:02:46Z generation:1 name:name2 resourceVersion:38815 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bb7d9db3-fd63-4650-b087-3498e75e24e9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 4 15:02:56.384: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-04T15:02:36Z generation:2 name:name1 resourceVersion:38839 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:a836e6d2-5d55-4e82-b6cf-04479f5d6ba0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 4 15:03:06.392: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-04T15:02:46Z generation:2 name:name2 resourceVersion:38865 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bb7d9db3-fd63-4650-b087-3498e75e24e9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 4 15:03:16.405: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-04T15:02:36Z generation:2 name:name1 resourceVersion:38891 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a836e6d2-5d55-4e82-b6cf-04479f5d6ba0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 4 15:03:26.423: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-04T15:02:46Z generation:2 name:name2 resourceVersion:38917 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bb7d9db3-fd63-4650-b087-3498e75e24e9] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:03:36.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5134" for this suite. • [SLOW TEST:60.887 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":174,"skipped":2750,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:03:36.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:03:37.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b" in namespace 
"downward-api-7515" to be "success or failure" Jan 4 15:03:37.031: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.53763ms Jan 4 15:03:39.046: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031791383s Jan 4 15:03:41.053: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03890108s Jan 4 15:03:43.061: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04692196s Jan 4 15:03:45.066: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051660368s Jan 4 15:03:47.073: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059255282s STEP: Saw pod success Jan 4 15:03:47.073: INFO: Pod "downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b" satisfied condition "success or failure" Jan 4 15:03:47.078: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b container client-container: STEP: delete the pod Jan 4 15:03:47.198: INFO: Waiting for pod downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b to disappear Jan 4 15:03:47.202: INFO: Pod downwardapi-volume-ffd99669-96fd-4090-8519-4ba86e2ec92b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:03:47.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7515" for this suite. 
• [SLOW TEST:10.257 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2757,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:03:47.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 4 15:03:58.790: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:03:58.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8893" for this suite. 
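The termination-message test above expects the literal string DONE in the container status. A sketch of the container shape involved: a non-root user writing to a non-default terminationMessagePath, which the kubelet copies into the status on exit (image, UID, and path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-root UID; illustrative
	// The container writes "DONE" to a non-default terminationMessagePath
	// while running as a non-root user; the kubelet copies the file's
	// contents into the container status on termination.
	c := corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "docker.io/library/busybox:1.29", // illustrative
		Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log", // illustrative non-default path
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}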
• [SLOW TEST:11.648 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2764,"failed":0} [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:03:58.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:03:59.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec" in namespace "downward-api-9331" to be "success or failure" Jan 4 15:03:59.136: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Pending", Reason="", readiness=false. Elapsed: 118.219887ms Jan 4 15:04:01.142: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124622414s Jan 4 15:04:03.201: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18343962s Jan 4 15:04:05.208: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190151177s Jan 4 15:04:07.255: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237482902s Jan 4 15:04:09.262: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.243826192s Jan 4 15:04:11.269: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.251503072s STEP: Saw pod success Jan 4 15:04:11.270: INFO: Pod "downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec" satisfied condition "success or failure" Jan 4 15:04:11.274: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec container client-container: STEP: delete the pod Jan 4 15:04:11.307: INFO: Waiting for pod downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec to disappear Jan 4 15:04:11.312: INFO: Pod downwardapi-volume-21228bbe-c230-4edf-948f-84b655253dec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:04:11.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9331" for this suite. • [SLOW TEST:12.465 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2764,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:04:11.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-qkt9 STEP: Creating a pod to test atomic-volume-subpath Jan 4 15:04:11.509: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qkt9" in namespace "subpath-7409" to be "success or failure" Jan 4 15:04:11.605: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Pending", Reason="", readiness=false. Elapsed: 95.807932ms Jan 4 15:04:13.612: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102748563s Jan 4 15:04:15.620: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110807118s Jan 4 15:04:17.624: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115429365s Jan 4 15:04:19.634: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.124884743s Jan 4 15:04:21.656: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.147361932s Jan 4 15:04:23.663: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.154178934s Jan 4 15:04:25.669: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.160107187s Jan 4 15:04:27.674: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.164924974s Jan 4 15:04:29.682: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.173676053s Jan 4 15:04:31.689: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.180212782s Jan 4 15:04:33.694: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.185539802s Jan 4 15:04:35.705: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 24.196644142s Jan 4 15:04:37.712: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Running", Reason="", readiness=true. Elapsed: 26.203000438s Jan 4 15:04:39.715: INFO: Pod "pod-subpath-test-downwardapi-qkt9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.206307881s STEP: Saw pod success Jan 4 15:04:39.715: INFO: Pod "pod-subpath-test-downwardapi-qkt9" satisfied condition "success or failure" Jan 4 15:04:39.717: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-qkt9 container test-container-subpath-downwardapi-qkt9: STEP: delete the pod Jan 4 15:04:39.778: INFO: Waiting for pod pod-subpath-test-downwardapi-qkt9 to disappear Jan 4 15:04:39.858: INFO: Pod pod-subpath-test-downwardapi-qkt9 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-qkt9 Jan 4 15:04:39.858: INFO: Deleting pod "pod-subpath-test-downwardapi-qkt9" in namespace "subpath-7409" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:04:39.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7409" for this suite. 
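A sketch of the downward-API variant of the subpath pattern just exercised: the same atomic-writer machinery as the secret case, fed by pod metadata instead of an API object. Paths and names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downward API volume consumed through a subPath mount.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "downward/podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume/podname",
		SubPath:   "downward/podname", // selects one file out of the volume
	}
	for _, v := range []interface{}{vol, mount} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}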
• [SLOW TEST:28.603 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":178,"skipped":2786,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:04:39.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 15:04:41.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Jan 4 15:04:43.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:04:45.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:04:47.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:04:49.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 15:04:52.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API 
group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:04:52.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5424" for this suite. STEP: Destroying namespace "webhook-5424-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.476 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":179,"skipped":2788,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:04:52.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bzw4 STEP: Creating a pod to test atomic-volume-subpath Jan 4 15:04:52.712: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bzw4" in namespace "subpath-8171" to be "success or failure" Jan 4 15:04:52.730: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.516096ms Jan 4 15:04:54.745: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031967977s Jan 4 15:04:56.753: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040059216s Jan 4 15:04:58.760: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047760296s Jan 4 15:05:00.768: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.054980788s Jan 4 15:05:02.771: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058567947s Jan 4 15:05:04.901: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 12.188231742s Jan 4 15:05:06.914: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 14.201610834s Jan 4 15:05:08.921: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 16.208341305s Jan 4 15:05:10.931: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 18.218469681s Jan 4 15:05:12.937: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 20.224295129s Jan 4 15:05:14.944: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 22.231645675s Jan 4 15:05:16.948: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 24.235795279s Jan 4 15:05:18.955: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 26.242475336s Jan 4 15:05:20.965: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 28.252751921s Jan 4 15:05:22.973: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 30.260572234s Jan 4 15:05:24.978: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Running", Reason="", readiness=true. Elapsed: 32.265097697s Jan 4 15:05:26.992: INFO: Pod "pod-subpath-test-configmap-bzw4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.278925764s STEP: Saw pod success Jan 4 15:05:26.992: INFO: Pod "pod-subpath-test-configmap-bzw4" satisfied condition "success or failure" Jan 4 15:05:26.994: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-bzw4 container test-container-subpath-configmap-bzw4: STEP: delete the pod Jan 4 15:05:27.489: INFO: Waiting for pod pod-subpath-test-configmap-bzw4 to disappear Jan 4 15:05:27.496: INFO: Pod pod-subpath-test-configmap-bzw4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-bzw4 Jan 4 15:05:27.496: INFO: Deleting pod "pod-subpath-test-configmap-bzw4" in namespace "subpath-8171" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:05:27.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8171" for this suite. 
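A sketch of the "mountPath of existing file" case: subPath lets a configMap key be mounted over a file that already exists in the container image rather than over a directory. The target file, configMap name, and key shown here are purely illustrative, not the test's actual values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Mounting a single configMap key over a pre-existing file in the image.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // illustrative
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/etc/hostname", // an existing file in the image; illustrative
		SubPath:   "configmap-key", // illustrative key name
	}
	outV, _ := json.MarshalIndent(vol, "", "  ")
	outM, _ := json.MarshalIndent(mount, "", "  ")
	fmt.Println(string(outV))
	fmt.Println(string(outM))
}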
• [SLOW TEST:35.114 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":180,"skipped":2800,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:05:27.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-4d751624-5c81-49cd-a271-782da97297cd in namespace container-probe-8234 Jan 4 15:05:39.915: INFO: Started pod test-webserver-4d751624-5c81-49cd-a271-782da97297cd in namespace container-probe-8234 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 15:05:39.921: INFO: Initial restart count of pod test-webserver-4d751624-5c81-49cd-a271-782da97297cd is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:09:41.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8234" for this suite. 
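A sketch of the probe shape behind the liveness test above: an HTTP liveness probe that keeps succeeding, so restartCount must stay 0 for the whole observation window (about four minutes in this run). Path, port, and thresholds are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		// The embedded field is named Handler at the v1.17 API tag this
		// run uses (renamed ProbeHandler in later releases).
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      5,
		FailureThreshold:    3,
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}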
• [SLOW TEST:253.710 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2801,"failed":0} [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:09:41.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4423.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4423.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4423.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4423.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4423.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4423.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 15:09:57.516: INFO: DNS probes using dns-4423/dns-test-b0f66c74-54bd-4bba-aedc-7593653b0d3c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:09:57.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4423" for this suite. 
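The hostname DNS test resolves dns-querier-2.dns-test-service-2.dns-4423.svc.cluster.local, which comes from combining a headless service with a pod's hostname and subdomain fields. A minimal sketch of that pairing (the selector, image, and port are illustrative; the service, hostname, and namespace names match this run's probes):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Headless service + pod hostname/subdomain => the FQDN probed above.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: "dns-4423"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,                  // headless
			Selector:  map[string]string{"dns-test": "true"}, // illustrative selector
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-test",
			Namespace: "dns-4423",
			Labels:    map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:  "querier",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // illustrative tag
			}},
		},
	}
	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}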
• [SLOW TEST:16.422 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":182,"skipped":2801,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:09:57.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0104 15:10:00.517775 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 15:10:00.517: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:10:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8871" for this suite. 
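What the garbage-collector test relies on is the ownerReference a Deployment controller stamps on its ReplicaSet: when the Deployment is deleted without orphaning, the GC follows that reference and removes the ReplicaSet and its pods. A sketch of the two objects involved (the owner name and UID below are placeholders; GC matches on the owner's real UID):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true
	// The ownerReference a Deployment controller sets on its ReplicaSet.
	ref := metav1.OwnerReference{
		APIVersion:         "apps/v1",
		Kind:               "Deployment",
		Name:               "simpletest-deployment", // placeholder
		UID:                "uid-of-the-deployment", // placeholder; GC matches on the real UID
		Controller:         &isController,
		BlockOwnerDeletion: &isController,
	}
	// Background propagation is the "not orphaning" case this test covers.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	for _, obj := range []interface{}{ref, opts} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}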
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":183,"skipped":2823,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:10:00.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d19da4f8-6e7e-4189-bc93-0e43f5878cac STEP: Creating a pod to test consume secrets Jan 4 15:10:01.138: INFO: Waiting up to 5m0s for pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661" in namespace "secrets-8499" to be "success or failure" Jan 4 15:10:01.323: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661": Phase="Pending", Reason="", readiness=false. Elapsed: 184.466015ms Jan 4 15:10:03.336: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197042909s Jan 4 15:10:05.351: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212413634s Jan 4 15:10:07.358: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219328657s Jan 4 15:10:09.381: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242642655s Jan 4 15:10:11.416: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.277683457s STEP: Saw pod success Jan 4 15:10:11.416: INFO: Pod "pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661" satisfied condition "success or failure" Jan 4 15:10:11.422: INFO: Trying to get logs from node jerma-node pod pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661 container secret-volume-test: STEP: delete the pod Jan 4 15:10:11.554: INFO: Waiting for pod pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661 to disappear Jan 4 15:10:11.558: INFO: Pod pod-secrets-194d57d0-398e-4ad8-b6a6-23503997f661 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:10:11.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8499" for this suite. 
• [SLOW TEST:11.025 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:10:11.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 4 15:10:11.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3133' Jan 4 15:10:13.902: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 15:10:13.902: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Jan 4 15:10:15.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3133' Jan 4 15:10:16.195: INFO: stderr: "" Jan 4 15:10:16.196: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:10:16.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3133" for this suite. 
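The stderr captured above is the v1.17 deprecation warning for generator-based kubectl run: the test's invocation defaulted to the deployment generator. The invocation and the replacements the warning points to (example names are placeholders):

    # what the test ran; defaults to --generator=deployment/apps.v1, hence the warning
    kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
    # recommended replacements
    kubectl run example-pod --image=docker.io/library/httpd:2.4.38-alpine --generator=run-pod/v1
    kubectl create deployment example-deploy --image=docker.io/library/httpd:2.4.38-alpine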
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":185,"skipped":2879,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:10:16.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jan 4 15:10:16.429: INFO: Waiting up to 5m0s for pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2" in namespace "var-expansion-4396" to be "success or failure" Jan 4 15:10:16.453: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.909173ms Jan 4 15:10:18.462: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032457433s Jan 4 15:10:20.468: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039149323s Jan 4 15:10:22.476: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047208202s Jan 4 15:10:24.483: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053825092s Jan 4 15:10:26.488: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059137248s STEP: Saw pod success Jan 4 15:10:26.488: INFO: Pod "var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2" satisfied condition "success or failure" Jan 4 15:10:26.492: INFO: Trying to get logs from node jerma-node pod var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2 container dapi-container: STEP: delete the pod Jan 4 15:10:26.533: INFO: Waiting for pod var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2 to disappear Jan 4 15:10:26.631: INFO: Pod var-expansion-40db8bbc-eb8b-4e5b-b19a-200a87c47bb2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:10:26.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4396" for this suite. 
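The env-composition test builds one environment variable out of another via the kubelet's $(VAR) expansion. A minimal pod showing the mechanism; names and values are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-compose                # placeholder
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $COMPOSED"]
        env:
        - name: BASE
          value: "foo"
        - name: COMPOSED
          value: "prefix-$(BASE)-suffix"   # $(BASE) expands because BASE is defined first
    EOF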
• [SLOW TEST:10.396 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:10:26.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1888/configmap-test-3c1320df-7754-4f13-a04d-f6d0723aa03c STEP: Creating a pod to test consume configMaps Jan 4 15:10:27.001: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5" in namespace "configmap-1888" to be "success or failure" Jan 4 15:10:27.027: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.090391ms Jan 4 15:10:29.033: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031960961s Jan 4 15:10:31.038: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037000492s Jan 4 15:10:33.044: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043512944s Jan 4 15:10:35.050: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048837405s Jan 4 15:10:37.055: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054426781s STEP: Saw pod success Jan 4 15:10:37.055: INFO: Pod "pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5" satisfied condition "success or failure" Jan 4 15:10:37.059: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5 container env-test: STEP: delete the pod Jan 4 15:10:37.110: INFO: Waiting for pod pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5 to disappear Jan 4 15:10:37.146: INFO: Pod pod-configmaps-ba1b6e91-c752-4895-8877-43887bfec2f5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:10:37.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1888" for this suite. 
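Consuming a ConfigMap through the environment, as this test does, wires a single key into an env var with configMapKeyRef. Illustrative names throughout:

    kubectl create configmap example-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-env              # placeholder
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env | grep DATA_1"]
        env:
        - name: DATA_1
          valueFrom:
            configMapKeyRef:
              name: example-config
              key: data-1
    EOF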
• [SLOW TEST:10.513 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2916,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:10:37.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:10:53.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-58" for this suite. • [SLOW TEST:16.262 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":188,"skipped":2925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:10:53.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:10:53.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0" in namespace "downward-api-3554" to be "success or failure" Jan 4 15:10:53.663: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 115.676678ms Jan 4 15:10:55.671: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12384368s Jan 4 15:10:57.675: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127601611s Jan 4 15:10:59.683: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135997429s Jan 4 15:11:01.687: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13943625s Jan 4 15:11:03.691: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143816454s STEP: Saw pod success Jan 4 15:11:03.691: INFO: Pod "downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0" satisfied condition "success or failure" Jan 4 15:11:03.694: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0 container client-container: STEP: delete the pod Jan 4 15:11:03.808: INFO: Waiting for pod downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0 to disappear Jan 4 15:11:03.938: INFO: Pod downwardapi-volume-af116322-61a6-4ef1-bb83-43f3d4d43ea0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:11:03.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3554" for this suite. 
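What this Downward API test asserts is that limits.memory, when projected into a volume for a container that sets no memory limit, falls back to the node's allocatable memory. The projection itself looks like this (placeholder names):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-mem               # placeholder
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory  # no limit set, so node allocatable is reported
    EOF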
• [SLOW TEST:10.537 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2953,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:11:03.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 4 15:11:12.047: INFO: &Pod{ObjectMeta:{send-events-5312fb52-178f-47c4-81a5-25771901dc48 events-6409 /api/v1/namespaces/events-6409/pods/send-events-5312fb52-178f-47c4-81a5-25771901dc48 52f06b22-950f-4f27-83d3-20a72c701f6b 40444 0 2020-01-04 15:11:04 +0000 UTC map[name:foo time:10260167] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qm6j7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qm6j7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qm6j7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:11:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:11:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:11:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:11:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-04 15:11:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 15:11:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e2a7517699735c5c52dc4f7d0c09eda0f328581591ae1597c89b4381bf565291,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 4 15:11:14.055: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 4 15:11:16.059: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:11:16.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6409" for this suite. • [SLOW TEST:12.241 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":190,"skipped":2956,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:11:16.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:11:33.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4970" for this suite. 
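A BestEffort-scoped quota, as created in the steps above, only tracks pods that set no requests or limits; a pod with resources is counted by the NotBestEffort-scoped quota instead. The scoped object itself is small (placeholder name):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: best-effort-quota          # placeholder
    spec:
      hard:
        pods: "5"                      # BestEffort scope only admits pod counts
      scopes:
      - BestEffort
    EOF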
• [SLOW TEST:17.175 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":191,"skipped":2964,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:11:33.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jan 4 15:11:33.746: INFO: Waiting up to 5m0s for pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966" in namespace "var-expansion-4900" to be "success or failure" Jan 4 15:11:33.764: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Pending", Reason="", readiness=false. Elapsed: 17.266084ms Jan 4 15:11:35.780: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033254758s Jan 4 15:11:37.787: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040213635s Jan 4 15:11:39.845: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098744275s Jan 4 15:11:41.850: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103636793s Jan 4 15:11:43.868: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Pending", Reason="", readiness=false. Elapsed: 10.121088853s Jan 4 15:11:45.878: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.131437353s STEP: Saw pod success Jan 4 15:11:45.878: INFO: Pod "var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966" satisfied condition "success or failure" Jan 4 15:11:45.883: INFO: Trying to get logs from node jerma-node pod var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966 container dapi-container: STEP: delete the pod Jan 4 15:11:45.944: INFO: Waiting for pod var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966 to disappear Jan 4 15:11:45.952: INFO: Pod var-expansion-c506b09b-20f7-4133-b913-0a4cd8e4d966 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:11:45.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4900" for this suite. 
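Here the substitution happens in the container command rather than the environment: the kubelet expands $(VAR) references in command and args before the container starts. A sketch with placeholder names, pulling the namespace from the downward API:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: command-substitution       # placeholder
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["echo", "running in namespace $(MY_NS)"]   # expanded by the kubelet, not a shell
        env:
        - name: MY_NS
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
    EOF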
• [SLOW TEST:12.598 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":2973,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:11:45.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:11:46.386: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520" in namespace "security-context-test-9781" to be "success or failure" Jan 4 15:11:46.409: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 22.72508ms Jan 4 15:11:48.413: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027203031s Jan 4 15:11:50.417: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030838891s Jan 4 15:11:52.424: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038187487s Jan 4 15:11:54.638: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251486681s Jan 4 15:11:56.643: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256906479s Jan 4 15:11:58.649: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 12.26251137s Jan 4 15:12:01.195: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 14.809211247s Jan 4 15:12:03.250: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 16.863887434s Jan 4 15:12:05.255: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 18.868471119s Jan 4 15:12:07.284: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.897364271s Jan 4 15:12:09.300: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 22.913674058s Jan 4 15:12:11.304: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Pending", Reason="", readiness=false. Elapsed: 24.917863991s Jan 4 15:12:13.308: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.921542046s Jan 4 15:12:13.308: INFO: Pod "alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:12:13.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9781" for this suite. • [SLOW TEST:27.354 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2983,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:12:13.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 4 15:12:13.612: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 4 15:12:13.628: INFO: Waiting for terminating namespaces to be deleted... 
Jan 4 15:12:13.630: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 4 15:12:13.636: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 4 15:12:13.636: INFO: Container weave ready: true, restart count 1 Jan 4 15:12:13.636: INFO: Container weave-npc ready: true, restart count 0 Jan 4 15:12:13.636: INFO: alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520 from security-context-test-9781 started at 2020-01-04 15:11:48 +0000 UTC (1 container status recorded) Jan 4 15:12:13.636: INFO: Container alpine-nnp-false-afcbbaff-684a-4d40-baa6-692c9a47c520 ready: false, restart count 0 Jan 4 15:12:13.636: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 4 15:12:13.636: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 15:12:13.636: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 4 15:12:13.650: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container coredns ready: true, restart count 0 Jan 4 15:12:13.650: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container coredns ready: true, restart count 0 Jan 4 15:12:13.650: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container kube-controller-manager ready: true, restart count 1 Jan 4 15:12:13.650: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 15:12:13.650: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 4 15:12:13.650: INFO: Container weave ready: true, restart count 0 Jan 4 15:12:13.650: INFO: Container weave-npc ready: true, restart count 0 Jan 4 15:12:13.650: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container kube-scheduler ready: true, restart count 2 Jan 4 15:12:13.650: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container kube-apiserver ready: true, restart count 1 Jan 4 15:12:13.650: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 4 15:12:13.650: INFO: Container etcd ready: true, restart count 1 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e6b7bb1a0a58a2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e6b7bb2d7a1deb], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:12:14.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8706" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":194,"skipped":2983,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:12:14.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 4 15:12:15.020: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7780 /api/v1/namespaces/watch-7780/configmaps/e2e-watch-test-watch-closed 6ded3c39-d0c0-4471-bb53-d4cd672b711f 40692 0 2020-01-04 15:12:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 4 15:12:15.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7780 /api/v1/namespaces/watch-7780/configmaps/e2e-watch-test-watch-closed 6ded3c39-d0c0-4471-bb53-d4cd672b711f 40693 0 2020-01-04 15:12:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 4 15:12:15.083: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7780 /api/v1/namespaces/watch-7780/configmaps/e2e-watch-test-watch-closed 6ded3c39-d0c0-4471-bb53-d4cd672b711f 40694 0 2020-01-04 15:12:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 4 15:12:15.083: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7780 /api/v1/namespaces/watch-7780/configmaps/e2e-watch-test-watch-closed 6ded3c39-d0c0-4471-bb53-d4cd672b711f 40695 0 2020-01-04 15:12:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:12:15.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7780" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":195,"skipped":2991,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:12:15.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 4 15:12:15.313: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:12:45.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5636" for this suite. • [SLOW TEST:30.405 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":196,"skipped":2993,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:12:45.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5008/configmap-test-809baefd-4b11-47c3-87e4-ff4557b5eac8 STEP: Creating a pod to test consume configMaps Jan 4 15:12:45.637: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881" in namespace "configmap-5008" to be "success or failure" Jan 4 15:12:45.644: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.084714ms Jan 4 15:12:47.648: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011282957s Jan 4 15:12:49.652: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015088116s Jan 4 15:12:51.657: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019699715s Jan 4 15:12:53.661: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023956322s Jan 4 15:12:55.665: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027816558s Jan 4 15:12:57.669: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032513449s Jan 4 15:12:59.675: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.037884626s STEP: Saw pod success Jan 4 15:12:59.675: INFO: Pod "pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881" satisfied condition "success or failure" Jan 4 15:12:59.678: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881 container env-test: STEP: delete the pod Jan 4 15:12:59.713: INFO: Waiting for pod pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881 to disappear Jan 4 15:12:59.741: INFO: Pod pod-configmaps-0a90a70b-f8fa-4e30-8a1e-eb7330e01881 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:12:59.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5008" for this suite. 
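This run wires ConfigMap keys into the environment one at a time; the related envFrom form imports every key of a ConfigMap at once. A hedged sketch, reusing the placeholder ConfigMap from the earlier note:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-envfrom          # placeholder
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env"]
        envFrom:
        - configMapRef:
            name: example-config       # every key becomes an environment variable
    EOF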
• [SLOW TEST:14.188 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3012,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:12:59.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:13:09.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3048" for this suite. • [SLOW TEST:10.201 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3030,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:13:09.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8965 STEP: Creating active service to test reachability when its FQDN is referred as 
externalName for another service STEP: creating service externalsvc in namespace services-8965 STEP: creating replication controller externalsvc in namespace services-8965 I0104 15:13:10.273750 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8965, replica count: 2 I0104 15:13:13.324489 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:13:16.324751 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:13:19.325038 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:13:22.325327 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 4 15:13:22.365: INFO: Creating new exec pod Jan 4 15:13:30.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8965 execpoddkgb5 -- /bin/sh -x -c nslookup clusterip-service' Jan 4 15:13:30.913: INFO: stderr: "I0104 15:13:30.702945 3739 log.go:172] (0xc0008d89a0) (0xc0006b0500) Create stream\nI0104 15:13:30.703175 3739 log.go:172] (0xc0008d89a0) (0xc0006b0500) Stream added, broadcasting: 1\nI0104 15:13:30.710937 3739 log.go:172] (0xc0008d89a0) Reply frame received for 1\nI0104 15:13:30.710979 3739 log.go:172] (0xc0008d89a0) (0xc0008d2000) Create stream\nI0104 15:13:30.710993 3739 log.go:172] (0xc0008d89a0) (0xc0008d2000) Stream added, broadcasting: 3\nI0104 15:13:30.712496 3739 log.go:172] (0xc0008d89a0) Reply frame received for 3\nI0104 15:13:30.712518 3739 log.go:172] (0xc0008d89a0) (0xc0006b0780) Create stream\nI0104 15:13:30.712526 3739 log.go:172] (0xc0008d89a0) (0xc0006b0780) Stream added, broadcasting: 5\nI0104 15:13:30.713747 3739 log.go:172] (0xc0008d89a0) Reply frame received for 5\nI0104 15:13:30.778380 3739 log.go:172] (0xc0008d89a0) Data frame received for 5\nI0104 15:13:30.778430 3739 log.go:172] (0xc0006b0780) (5) Data frame handling\nI0104 15:13:30.778446 3739 log.go:172] (0xc0006b0780) (5) Data frame sent\n+ nslookup clusterip-service\nI0104 15:13:30.795988 3739 log.go:172] (0xc0008d89a0) Data frame received for 3\nI0104 15:13:30.796061 3739 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0104 15:13:30.796084 3739 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0104 15:13:30.797165 3739 log.go:172] (0xc0008d89a0) Data frame received for 3\nI0104 15:13:30.797174 3739 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0104 15:13:30.797184 3739 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0104 15:13:30.907197 3739 log.go:172] (0xc0008d89a0) (0xc0008d2000) Stream removed, broadcasting: 3\nI0104 15:13:30.907442 3739 log.go:172] (0xc0008d89a0) Data frame received for 1\nI0104 15:13:30.907456 3739 log.go:172] (0xc0006b0500) (1) Data frame handling\nI0104 15:13:30.907462 3739 log.go:172] (0xc0006b0500) (1) Data frame sent\nI0104 15:13:30.907498 3739 log.go:172] (0xc0008d89a0) (0xc0006b0500) Stream removed, broadcasting: 1\nI0104 15:13:30.907719 3739 log.go:172] (0xc0008d89a0) (0xc0006b0780) Stream removed, broadcasting: 5\nI0104 15:13:30.907786 3739 log.go:172] (0xc0008d89a0) Go away received\nI0104 15:13:30.907810 3739 log.go:172] (0xc0008d89a0) (0xc0006b0500) Stream removed, broadcasting: 
1\nI0104 15:13:30.907824 3739 log.go:172] (0xc0008d89a0) (0xc0008d2000) Stream removed, broadcasting: 3\nI0104 15:13:30.907879 3739 log.go:172] (0xc0008d89a0) (0xc0006b0780) Stream removed, broadcasting: 5\n" Jan 4 15:13:30.914: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8965.svc.cluster.local\tcanonical name = externalsvc.services-8965.svc.cluster.local.\nName:\texternalsvc.services-8965.svc.cluster.local\nAddress: 10.96.131.113\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8965, will wait for the garbage collector to delete the pods Jan 4 15:13:30.977: INFO: Deleting ReplicationController externalsvc took: 6.536231ms Jan 4 15:13:31.477: INFO: Terminating ReplicationController externalsvc pods took: 500.431821ms Jan 4 15:13:42.573: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:13:42.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8965" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:32.745 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":199,"skipped":3043,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:13:42.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0104 15:13:53.390781 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 4 15:13:53.390: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:13:53.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4883" for this suite. • [SLOW TEST:11.805 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":200,"skipped":3051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:13:54.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 4 15:13:54.692: INFO: Waiting up to 5m0s for pod "pod-b1c8059f-5843-4927-a710-03f337b76e83" in namespace "emptydir-16" to be "success or failure" Jan 4 15:13:54.828: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Pending", Reason="", readiness=false. Elapsed: 136.218633ms Jan 4 15:13:56.834: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141320979s Jan 4 15:13:58.837: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144964085s Jan 4 15:14:00.841: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148398699s Jan 4 15:14:02.851: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.158433778s Jan 4 15:14:04.929: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236493191s Jan 4 15:14:06.933: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.240539654s STEP: Saw pod success Jan 4 15:14:06.933: INFO: Pod "pod-b1c8059f-5843-4927-a710-03f337b76e83" satisfied condition "success or failure" Jan 4 15:14:06.935: INFO: Trying to get logs from node jerma-node pod pod-b1c8059f-5843-4927-a710-03f337b76e83 container test-container: STEP: delete the pod Jan 4 15:14:06.963: INFO: Waiting for pod pod-b1c8059f-5843-4927-a710-03f337b76e83 to disappear Jan 4 15:14:06.980: INFO: Pod pod-b1c8059f-5843-4927-a710-03f337b76e83 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:14:06.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-16" for this suite. • [SLOW TEST:12.507 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3078,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:14:07.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jan 4 15:14:15.175: INFO: Pod pod-hostip-1d6abd71-3406-4ef8-b62f-ea7152f8b561 has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:14:15.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-561" for this suite. 
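The host-IP spec above only asserts that Status.HostIP is populated once the pod lands on a node. A sketch of the equivalent read, assuming client-go v0.18+ and a clientset built as in the earlier sketch; the pod name is the generated one from this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printHostIP reads the host IP the kubelet reported for a running pod.
    // cs is a clientset constructed as in the earlier sketch.
    func printHostIP(cs kubernetes.Interface) error {
        pod, err := cs.CoreV1().Pods("pods-561").Get(
            context.TODO(),
            "pod-hostip-1d6abd71-3406-4ef8-b62f-ea7152f8b561",
            metav1.GetOptions{})
        if err != nil {
            return err
        }
        // Empty until the pod is bound and the kubelet reports status; the
        // spec polls until it is non-empty (10.96.2.250 in the log above).
        fmt.Println(pod.Status.HostIP)
        return nil
    }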
• [SLOW TEST:8.215 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3100,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:14:15.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 4 15:14:15.295: INFO: Waiting up to 5m0s for pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac" in namespace "emptydir-3751" to be "success or failure" Jan 4 15:14:15.309: INFO: Pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac": Phase="Pending", Reason="", readiness=false. Elapsed: 14.1911ms Jan 4 15:14:17.314: INFO: Pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019484617s Jan 4 15:14:19.319: INFO: Pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024480057s Jan 4 15:14:21.324: INFO: Pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028944148s Jan 4 15:14:23.337: INFO: Pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042313518s STEP: Saw pod success Jan 4 15:14:23.337: INFO: Pod "pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac" satisfied condition "success or failure" Jan 4 15:14:23.340: INFO: Trying to get logs from node jerma-node pod pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac container test-container: STEP: delete the pod Jan 4 15:14:23.582: INFO: Waiting for pod pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac to disappear Jan 4 15:14:23.587: INFO: Pod pod-9de58c5f-a9aa-4574-8e94-4ddc70d4cfac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:14:23.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3751" for this suite. 
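For the (non-root,0644,tmpfs) case just completed, the pod under test mounts a memory-backed emptyDir and writes a mode-0644 file as a non-root user. A sketch of such a pod spec, assuming recent k8s.io/api; the UID, image, and command are illustrative stand-ins, not the suite's actual test image setup:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirTmpfsPod writes a mode-0644 file into a tmpfs-backed emptyDir
    // as a non-root user, prints the resulting mode, and exits.
    func emptyDirTmpfsPod() *corev1.Pod {
        uid := int64(1001) // hypothetical non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox", // illustrative image
                    Command:         []string{"sh", "-c", "umask 022 && echo hello > /mnt/test/f && stat -c %a /mnt/test/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" = tmpfs; leaving Medium empty gives
                        // the node-default medium that the "default" variants
                        // of this spec family exercise.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
    }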
• [SLOW TEST:8.378 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3102,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:14:23.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:14:34.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-442" for this suite. • [SLOW TEST:10.569 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":204,"skipped":3106,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:14:34.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:14:34.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd" in namespace "projected-9129" to be "success or failure" Jan 4 15:14:34.388: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.081016ms Jan 4 15:14:36.394: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032548343s Jan 4 15:14:38.403: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040796666s Jan 4 15:14:40.408: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045692319s Jan 4 15:14:42.416: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054300642s Jan 4 15:14:44.421: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058687306s Jan 4 15:14:46.431: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.068644307s Jan 4 15:14:48.438: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.076434286s STEP: Saw pod success Jan 4 15:14:48.439: INFO: Pod "downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd" satisfied condition "success or failure" Jan 4 15:14:48.447: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd container client-container: STEP: delete the pod Jan 4 15:14:48.664: INFO: Waiting for pod downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd to disappear Jan 4 15:14:48.675: INFO: Pod downwardapi-volume-33e8f07a-3b35-485a-a638-efa1dabe9ddd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:14:48.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9129" for this suite. 
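The projected downwardAPI spec above surfaces the container's CPU limit as a file in the pod. A sketch of the volume wiring, assuming recent k8s.io/api; names, paths, and the 500m limit are illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // cpuLimitPod projects limits.cpu of client-container into
    // /etc/podinfo/cpu_limit via a projected downwardAPI volume.
    func cpuLimitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }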
• [SLOW TEST:14.510 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3118,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:14:48.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 4 15:14:48.889: INFO: Waiting up to 5m0s for pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf" in namespace "emptydir-9548" to be "success or failure" Jan 4 15:14:48.936: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Pending", Reason="", readiness=false. Elapsed: 46.892783ms Jan 4 15:14:50.940: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051225381s Jan 4 15:14:52.952: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063645002s Jan 4 15:14:54.958: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069179062s Jan 4 15:14:56.962: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073735821s Jan 4 15:14:58.976: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.086918396s Jan 4 15:15:00.981: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.092178615s STEP: Saw pod success Jan 4 15:15:00.981: INFO: Pod "pod-0e77efef-5b9b-421f-afbf-4945e515cecf" satisfied condition "success or failure" Jan 4 15:15:00.984: INFO: Trying to get logs from node jerma-node pod pod-0e77efef-5b9b-421f-afbf-4945e515cecf container test-container: STEP: delete the pod Jan 4 15:15:01.025: INFO: Waiting for pod pod-0e77efef-5b9b-421f-afbf-4945e515cecf to disappear Jan 4 15:15:01.029: INFO: Pod pod-0e77efef-5b9b-421f-afbf-4945e515cecf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:15:01.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9548" for this suite. 
• [SLOW TEST:12.351 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3124,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:15:01.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:15:01.214: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 4 15:15:01.225: INFO: Number of nodes with available pods: 0 Jan 4 15:15:01.226: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
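The complex-daemon spec drives scheduling purely through the pod template's nodeSelector and the node's labels, which is what the blue/green relabelling churn logged below is exercising. A sketch of such a DaemonSet, assuming recent k8s.io/api; the color=blue key/value is illustrative (the suite generates its own label key):

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // labeledDaemonSet only schedules onto nodes carrying color=blue;
    // relabel the node to green and its daemon pod is removed, matching
    // the unschedule step logged below.
    func labeledDaemonSet() *appsv1.DaemonSet {
        labels := map[string]string{"app": "daemon-demo"}
        return &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    // The spec later flips this to RollingUpdate as well.
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        NodeSelector: map[string]string{"color": "blue"}, // illustrative key/value
                        Containers: []corev1.Container{{
                            Name:    "app",
                            Image:   "busybox",
                            Command: []string{"sleep", "3600"},
                        }},
                    },
                },
            },
        }
    }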
Jan 4 15:15:01.654: INFO: Number of nodes with available pods: 0 Jan 4 15:15:01.654: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:02.659: INFO: Number of nodes with available pods: 0 Jan 4 15:15:02.659: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:03.663: INFO: Number of nodes with available pods: 0 Jan 4 15:15:03.664: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:04.662: INFO: Number of nodes with available pods: 0 Jan 4 15:15:04.662: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:05.660: INFO: Number of nodes with available pods: 0 Jan 4 15:15:05.660: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:06.658: INFO: Number of nodes with available pods: 0 Jan 4 15:15:06.658: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:07.672: INFO: Number of nodes with available pods: 0 Jan 4 15:15:07.672: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:08.659: INFO: Number of nodes with available pods: 0 Jan 4 15:15:08.659: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:09.660: INFO: Number of nodes with available pods: 1 Jan 4 15:15:09.660: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 4 15:15:09.835: INFO: Number of nodes with available pods: 1 Jan 4 15:15:09.835: INFO: Number of running nodes: 0, number of available pods: 1 Jan 4 15:15:10.841: INFO: Number of nodes with available pods: 0 Jan 4 15:15:10.841: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 4 15:15:10.862: INFO: Number of nodes with available pods: 0 Jan 4 15:15:10.862: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:11.871: INFO: Number of nodes with available pods: 0 Jan 4 15:15:11.871: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:12.869: INFO: Number of nodes with available pods: 0 Jan 4 15:15:12.869: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:13.873: INFO: Number of nodes with available pods: 0 Jan 4 15:15:13.873: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:14.876: INFO: Number of nodes with available pods: 0 Jan 4 15:15:14.876: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:15.867: INFO: Number of nodes with available pods: 0 Jan 4 15:15:15.867: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:16.922: INFO: Number of nodes with available pods: 0 Jan 4 15:15:16.922: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:17.870: INFO: Number of nodes with available pods: 0 Jan 4 15:15:17.870: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:18.874: INFO: Number of nodes with available pods: 0 Jan 4 15:15:18.874: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:19.867: INFO: Number of nodes with available pods: 0 Jan 4 15:15:19.868: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:20.868: INFO: Number of nodes with available pods: 0 Jan 4 15:15:20.868: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:21.869: INFO: Number of nodes with available pods: 0 Jan 4 15:15:21.869: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:22.869: INFO: Number of nodes with available pods: 0 Jan 4 
15:15:22.869: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:23.884: INFO: Number of nodes with available pods: 0 Jan 4 15:15:23.885: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:24.870: INFO: Number of nodes with available pods: 0 Jan 4 15:15:24.870: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:25.868: INFO: Number of nodes with available pods: 0 Jan 4 15:15:25.868: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:26.868: INFO: Number of nodes with available pods: 0 Jan 4 15:15:26.869: INFO: Node jerma-node is running more than one daemon pod Jan 4 15:15:27.880: INFO: Number of nodes with available pods: 1 Jan 4 15:15:27.880: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8380, will wait for the garbage collector to delete the pods Jan 4 15:15:27.944: INFO: Deleting DaemonSet.extensions daemon-set took: 6.225516ms Jan 4 15:15:28.345: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.622708ms Jan 4 15:15:42.659: INFO: Number of nodes with available pods: 0 Jan 4 15:15:42.659: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 15:15:42.661: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8380/daemonsets","resourceVersion":"41589"},"items":null} Jan 4 15:15:42.664: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8380/pods","resourceVersion":"41589"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:15:42.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8380" for this suite. • [SLOW TEST:41.680 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":207,"skipped":3142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:15:42.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
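The prestop spec below deletes a pod whose preStop HTTP hook must fire against the handler container created above, and then checks the handler's log for the request. A sketch of the hooked pod, assuming k8s.io/api v1.23+ (where the type is LifecycleHandler; older releases name it Handler); the endpoint and port are illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // preStopPod issues an HTTP GET to the handler pod when it is deleted;
    // the kubelet runs the hook before sending SIGTERM to the container.
    func preStopPod(handlerIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "pod-with-prestop-http-hook",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.LifecycleHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=prestop", // illustrative handler endpoint
                                Host: handlerIP,
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
    }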
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 4 15:16:00.893: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:00.899: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 15:16:02.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:02.941: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 15:16:04.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:04.904: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 15:16:06.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:06.903: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 15:16:08.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:08.907: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 15:16:10.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:11.158: INFO: Pod pod-with-prestop-http-hook still exists Jan 4 15:16:12.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 4 15:16:12.905: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:16:12.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5270" for this suite. • [SLOW TEST:30.235 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3228,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:16:12.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-3724/secret-test-53d11ce9-50a8-426e-b98d-6e94e183a450 STEP: Creating a pod to test consume secrets Jan 4 15:16:13.110: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c" in namespace "secrets-3724" to be "success or failure" Jan 4 15:16:13.116: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Pending", Reason="", 
readiness=false. Elapsed: 5.419321ms Jan 4 15:16:15.120: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009637596s Jan 4 15:16:17.124: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013890319s Jan 4 15:16:19.129: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018958018s Jan 4 15:16:21.134: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023925839s Jan 4 15:16:23.139: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028730922s Jan 4 15:16:25.143: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.032705118s STEP: Saw pod success Jan 4 15:16:25.143: INFO: Pod "pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c" satisfied condition "success or failure" Jan 4 15:16:25.145: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c container env-test: STEP: delete the pod Jan 4 15:16:25.197: INFO: Waiting for pod pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c to disappear Jan 4 15:16:25.218: INFO: Pod pod-configmaps-0bb64868-295f-4ebc-b759-867fcef9131c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:16:25.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3724" for this suite. • [SLOW TEST:12.270 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3237,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:16:25.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-f13571bb-c78a-4fb6-aad1-11d894031b89 in namespace container-probe-3271 Jan 4 15:16:37.468: INFO: Started pod liveness-f13571bb-c78a-4fb6-aad1-11d894031b89 in namespace container-probe-3271 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 15:16:37.473: INFO: 
Initial restart count of pod liveness-f13571bb-c78a-4fb6-aad1-11d894031b89 is 0 Jan 4 15:17:07.632: INFO: Restart count of pod container-probe-3271/liveness-f13571bb-c78a-4fb6-aad1-11d894031b89 is now 1 (30.158385984s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:17:07.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3271" for this suite. • [SLOW TEST:42.500 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3248,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:17:07.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 4 15:17:07.802: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 4 15:17:07.870: INFO: Waiting for terminating namespaces to be deleted... 
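For the /healthz liveness spec just completed (restartCount 0 to 1 in about 30s), a sketch of a pod whose HTTP liveness probe eventually fails and triggers a kubelet restart. Assumes k8s.io/api v1.23+ (ProbeHandler, formerly Handler); using agnhost's liveness server is an assumption based on the image appearing elsewhere in this run, not something this log confirms:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // healthzProbedPod is restarted by the kubelet once GET /healthz starts
    // failing; Status.ContainerStatuses[0].RestartCount then increments.
    func healthzProbedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Args:  []string{"liveness"}, // assumed: serves /healthz, then starts failing
                    LivenessProbe: &corev1.Probe{
                        ProbeHandler: corev1.ProbeHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/healthz",
                                Port: intstr.FromInt(8080),
                            },
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }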
Jan 4 15:17:07.873: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 4 15:17:07.896: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 4 15:17:07.896: INFO: Container weave ready: true, restart count 1 Jan 4 15:17:07.896: INFO: Container weave-npc ready: true, restart count 0 Jan 4 15:17:07.896: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 4 15:17:07.896: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 15:17:07.896: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 4 15:17:07.916: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container kube-apiserver ready: true, restart count 1 Jan 4 15:17:07.916: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container etcd ready: true, restart count 1 Jan 4 15:17:07.916: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container coredns ready: true, restart count 0 Jan 4 15:17:07.916: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container coredns ready: true, restart count 0 Jan 4 15:17:07.916: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container kube-controller-manager ready: true, restart count 1 Jan 4 15:17:07.916: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 15:17:07.916: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 4 15:17:07.916: INFO: Container weave ready: true, restart count 0 Jan 4 15:17:07.916: INFO: Container weave-npc ready: true, restart count 0 Jan 4 15:17:07.916: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 4 15:17:07.916: INFO: Container kube-scheduler ready: true, restart count 2 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
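The spec then creates three pods, shown below, that all request hostPort 54321 on the labelled node, differing only in hostIP or protocol, and expects all three to schedule: port conflicts are keyed on the full (hostPort, hostIP, protocol) tuple, not the port alone. A sketch of the pod shape involved, assuming recent k8s.io/api; the image and the nodeSelector label are illustrative stand-ins for the suite's generated values:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostPortPod pins a pod to the labelled node and claims hostPort 54321
    // for the given hostIP/protocol tuple.
    func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{"e2e-demo": "90"}, // illustrative stand-in for the random label
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 8080,
                        HostPort:      54321,
                        HostIP:        hostIP,
                        Protocol:      proto,
                    }},
                }},
            },
        }
    }

    // pod1/pod2 differ in hostIP; pod3 reuses 127.0.0.2 but over UDP:
    //   hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP)
    //   hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP)
    //   hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP)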
STEP: verifying the node has the label kubernetes.io/e2e-d38dcd52-8940-4f47-9b83-1d2de88e917f 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-d38dcd52-8940-4f47-9b83-1d2de88e917f off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-d38dcd52-8940-4f47-9b83-1d2de88e917f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:17:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7944" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:42.432 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":211,"skipped":3252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:17:50.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:17:50.225: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 4 15:17:50.244: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 4 15:17:55.279: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 4 15:17:57.383: INFO: Creating deployment "test-rolling-update-deployment" Jan 4 15:17:57.390: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 4 15:17:57.407: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 4 15:17:59.417: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 4 15:17:59.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:18:01.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:18:03.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:18:05.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:18:07.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:18:09.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747877, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:18:11.424: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 4 15:18:11.432: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6552 /apis/apps/v1/namespaces/deployment-6552/deployments/test-rolling-update-deployment 9ab983e4-7b88-4540-89e8-be43a5c560b4 42164 1 2020-01-04 15:17:57 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cbb068 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-04 15:17:57 +0000 UTC,LastTransitionTime:2020-01-04 15:17:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-04 15:18:11 +0000 UTC,LastTransitionTime:2020-01-04 15:17:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 4 15:18:11.433: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-6552 /apis/apps/v1/namespaces/deployment-6552/replicasets/test-rolling-update-deployment-67cf4f6444 988ae421-783d-4f67-bf02-666aecb4af99 42154 1 2020-01-04 15:17:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 9ab983e4-7b88-4540-89e8-be43a5c560b4 0xc001b7ac57 0xc001b7ac58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001b7acc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 4 15:18:11.433: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 4 15:18:11.433: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6552 /apis/apps/v1/namespaces/deployment-6552/replicasets/test-rolling-update-controller 3985c816-99b9-4d98-8509-323d2257fd92 42163 2 2020-01-04 15:17:50 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 9ab983e4-7b88-4540-89e8-be43a5c560b4 
0xc001b7ab87 0xc001b7ab88}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001b7abe8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 4 15:18:11.435: INFO: Pod "test-rolling-update-deployment-67cf4f6444-9v877" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-9v877 test-rolling-update-deployment-67cf4f6444- deployment-6552 /api/v1/namespaces/deployment-6552/pods/test-rolling-update-deployment-67cf4f6444-9v877 3513ef6f-b722-4d69-b99f-9b22666697bc 42153 0 2020-01-04 15:17:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 988ae421-783d-4f67-bf02-666aecb4af99 0xc002cbbeb7 0xc002cbbeb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wwgn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wwgn2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wwgn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{
},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:17:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:18:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:18:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-04 15:17:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-04 15:17:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-04 15:18:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://905a8ab9f4bf6425f08409d0b13f02fa6da2648b9e8e3107e32a04d6b9234326,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:18:11.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6552" for this suite. 
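The dumps above show the mechanism under test: the Deployment's selector (name=sample-pod) matches the pods of the pre-existing test-rolling-update-controller ReplicaSet, so the Deployment adopts that ReplicaSet, takes the next revision number, and rolls it down to zero replicas while the new ReplicaSet comes up. A sketch of such a Deployment, assuming recent k8s.io/api:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // rollingUpdateDeployment adopts any ReplicaSet whose pods match
    // name=sample-pod and replaces those pods via a rolling update.
    func rollingUpdateDeployment() *appsv1.Deployment {
        one := int32(1)
        labels := map[string]string{"name": "sample-pod"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &one,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    // RollingUpdate with the default 25%/25% maxUnavailable
                    // and maxSurge seen in the dump above.
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "agnhost",
                            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                        }},
                    },
                },
            },
        }
    }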
• [SLOW TEST:21.281 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":212,"skipped":3283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:18:11.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7012 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7012 STEP: creating replication controller externalsvc in namespace services-7012 I0104 15:18:11.785567 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7012, replica count: 2 I0104 15:18:14.836143 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:18:17.836470 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:18:20.836704 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:18:23.836987 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:18:26.837463 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 4 15:18:26.948: INFO: Creating new exec pod Jan 4 15:18:35.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7012 execpodwhqwp -- /bin/sh -x -c nslookup nodeport-service' Jan 4 15:18:35.368: INFO: stderr: "I0104 15:18:35.129362 3753 log.go:172] (0xc0008b6b00) (0xc00054fd60) Create stream\nI0104 15:18:35.129477 3753 log.go:172] (0xc0008b6b00) (0xc00054fd60) Stream added, broadcasting: 1\nI0104 15:18:35.132894 3753 log.go:172] (0xc0008b6b00) Reply frame received for 1\nI0104 15:18:35.132914 3753 log.go:172] (0xc0008b6b00) (0xc000a721e0) Create stream\nI0104 15:18:35.132919 3753 log.go:172] (0xc0008b6b00) (0xc000a721e0) 
Stream added, broadcasting: 3\nI0104 15:18:35.133772 3753 log.go:172] (0xc0008b6b00) Reply frame received for 3\nI0104 15:18:35.133788 3753 log.go:172] (0xc0008b6b00) (0xc000a72280) Create stream\nI0104 15:18:35.133793 3753 log.go:172] (0xc0008b6b00) (0xc000a72280) Stream added, broadcasting: 5\nI0104 15:18:35.134954 3753 log.go:172] (0xc0008b6b00) Reply frame received for 5\nI0104 15:18:35.229706 3753 log.go:172] (0xc0008b6b00) Data frame received for 5\nI0104 15:18:35.229765 3753 log.go:172] (0xc000a72280) (5) Data frame handling\nI0104 15:18:35.229774 3753 log.go:172] (0xc000a72280) (5) Data frame sent\n+ nslookup nodeport-service\nI0104 15:18:35.260233 3753 log.go:172] (0xc0008b6b00) Data frame received for 3\nI0104 15:18:35.260307 3753 log.go:172] (0xc000a721e0) (3) Data frame handling\nI0104 15:18:35.260318 3753 log.go:172] (0xc000a721e0) (3) Data frame sent\nI0104 15:18:35.260327 3753 log.go:172] (0xc0008b6b00) Data frame received for 3\nI0104 15:18:35.260340 3753 log.go:172] (0xc000a721e0) (3) Data frame handling\nI0104 15:18:35.260371 3753 log.go:172] (0xc000a721e0) (3) Data frame sent\nI0104 15:18:35.363852 3753 log.go:172] (0xc0008b6b00) (0xc000a721e0) Stream removed, broadcasting: 3\nI0104 15:18:35.363917 3753 log.go:172] (0xc0008b6b00) Data frame received for 1\nI0104 15:18:35.363925 3753 log.go:172] (0xc00054fd60) (1) Data frame handling\nI0104 15:18:35.363932 3753 log.go:172] (0xc00054fd60) (1) Data frame sent\nI0104 15:18:35.363938 3753 log.go:172] (0xc0008b6b00) (0xc00054fd60) Stream removed, broadcasting: 1\nI0104 15:18:35.364147 3753 log.go:172] (0xc0008b6b00) (0xc000a72280) Stream removed, broadcasting: 5\nI0104 15:18:35.364170 3753 log.go:172] (0xc0008b6b00) (0xc00054fd60) Stream removed, broadcasting: 1\nI0104 15:18:35.364177 3753 log.go:172] (0xc0008b6b00) (0xc000a721e0) Stream removed, broadcasting: 3\nI0104 15:18:35.364183 3753 log.go:172] (0xc0008b6b00) (0xc000a72280) Stream removed, broadcasting: 5\nI0104 15:18:35.364305 3753 log.go:172] (0xc0008b6b00) Go away received\n" Jan 4 15:18:35.369: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7012.svc.cluster.local\tcanonical name = externalsvc.services-7012.svc.cluster.local.\nName:\texternalsvc.services-7012.svc.cluster.local\nAddress: 10.96.180.213\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7012, will wait for the garbage collector to delete the pods Jan 4 15:18:35.429: INFO: Deleting ReplicationController externalsvc took: 5.664638ms Jan 4 15:18:35.829: INFO: Terminating ReplicationController externalsvc pods took: 400.321908ms Jan 4 15:18:53.292: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:18:53.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7012" for this suite. 
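The nslookup output above is the whole point of the type flip: once the Service becomes ExternalName, cluster DNS answers with a CNAME to the backing service's FQDN instead of a cluster IP. Here is a sketch of the target Service object in Go, using the names from this run; the ExternalName value is the canonical name the exec pod reported, and building the object client-side keeps the sketch runnable without a cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Target state of "changing the NodePort service to type=ExternalName":
	// lookups for nodeport-service now resolve via CNAME to externalsvc,
	// matching the resolution the exec pod observed above.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "nodeport-service",
			Namespace: "services-7012",
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "externalsvc.services-7012.svc.cluster.local",
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}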
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:41.908 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":213,"skipped":3335,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:18:53.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:18:53.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692" in namespace "projected-6047" to be "success or failure" Jan 4 15:18:53.551: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692": Phase="Pending", Reason="", readiness=false. Elapsed: 28.497787ms Jan 4 15:18:55.557: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034119053s Jan 4 15:18:57.569: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046089584s Jan 4 15:18:59.574: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051278516s Jan 4 15:19:01.579: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05625391s Jan 4 15:19:03.584: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.061252722s STEP: Saw pod success Jan 4 15:19:03.584: INFO: Pod "downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692" satisfied condition "success or failure" Jan 4 15:19:03.587: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692 container client-container: STEP: delete the pod Jan 4 15:19:03.651: INFO: Waiting for pod downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692 to disappear Jan 4 15:19:03.656: INFO: Pod downwardapi-volume-98f4a447-07a3-4d5f-b20c-a35e522f5692 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:19:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6047" for this suite. • [SLOW TEST:10.316 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3347,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:19:03.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:19:17.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7332" for this suite. • [SLOW TEST:13.371 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":215,"skipped":3353,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:19:17.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 4 15:19:17.439: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:19:42.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2257" for this suite. • [SLOW TEST:25.297 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3358,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:19:42.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 4 15:19:42.518: INFO: Waiting up to 5m0s for pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4" in namespace "emptydir-802" to be "success or failure" Jan 4 15:19:42.933: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 414.413648ms Jan 4 15:19:44.936: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.417786189s Jan 4 15:19:46.952: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433251862s Jan 4 15:19:48.957: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438032963s Jan 4 15:19:51.339: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.820512612s Jan 4 15:19:53.344: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.825027879s Jan 4 15:19:55.349: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.83016725s STEP: Saw pod success Jan 4 15:19:55.349: INFO: Pod "pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4" satisfied condition "success or failure" Jan 4 15:19:55.352: INFO: Trying to get logs from node jerma-node pod pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4 container test-container: STEP: delete the pod Jan 4 15:19:55.394: INFO: Waiting for pod pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4 to disappear Jan 4 15:19:55.414: INFO: Pod pod-e1df9b9a-f8dd-4213-bce4-7611a6015ec4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:19:55.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-802" for this suite. • [SLOW TEST:13.112 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3359,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:19:55.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 4 15:19:55.686: INFO: Waiting up to 5m0s for pod "downward-api-aad557d3-8616-450e-841f-701617f63559" in namespace "downward-api-763" to be "success or failure" Jan 4 15:19:55.695: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559": Phase="Pending", Reason="", readiness=false. Elapsed: 9.081346ms Jan 4 15:19:57.703: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017053819s Jan 4 15:19:59.812: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.125505961s Jan 4 15:20:01.819: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132538095s Jan 4 15:20:03.834: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147849983s Jan 4 15:20:05.840: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154082161s STEP: Saw pod success Jan 4 15:20:05.840: INFO: Pod "downward-api-aad557d3-8616-450e-841f-701617f63559" satisfied condition "success or failure" Jan 4 15:20:05.843: INFO: Trying to get logs from node jerma-node pod downward-api-aad557d3-8616-450e-841f-701617f63559 container dapi-container: STEP: delete the pod Jan 4 15:20:05.884: INFO: Waiting for pod downward-api-aad557d3-8616-450e-841f-701617f63559 to disappear Jan 4 15:20:06.005: INFO: Pod downward-api-aad557d3-8616-450e-841f-701617f63559 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:20:06.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-763" for this suite. • [SLOW TEST:10.567 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3376,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:20:06.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 4 15:20:06.407: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:20:19.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1503" for this suite. 
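The spec name states the contract, but the semantics are worth spelling out: with RestartPolicy Never, a failed init container is terminal, so the kubelet never starts the app containers and the pod ends in phase Failed. A minimal sketch of that kind of pod follows; the names, busybox image, and commands here are illustrative assumptions, not the test's actual PodSpec.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// Never means the failed init container is not retried and the
			// app container never starts: the pod goes straight to Failed.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fail",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo should never run"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}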
• [SLOW TEST:13.337 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":219,"skipped":3391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:20:19.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 4 15:20:19.438: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:20:33.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3745" for this suite. • [SLOW TEST:14.041 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":220,"skipped":3417,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:20:33.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:20:44.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7026" for this suite. • [SLOW TEST:11.205 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":221,"skipped":3428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:20:44.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jan 4 15:20:44.721: INFO: Waiting up to 5m0s for pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c" in namespace "containers-3748" to be "success or failure" Jan 4 15:20:44.728: INFO: Pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.256299ms Jan 4 15:20:46.753: INFO: Pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031722105s Jan 4 15:20:48.756: INFO: Pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03560226s Jan 4 15:20:50.763: INFO: Pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042440235s Jan 4 15:20:52.768: INFO: Pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.047540703s STEP: Saw pod success Jan 4 15:20:52.769: INFO: Pod "client-containers-14513490-6fa1-4740-b25a-17e5cb02991c" satisfied condition "success or failure" Jan 4 15:20:52.773: INFO: Trying to get logs from node jerma-node pod client-containers-14513490-6fa1-4740-b25a-17e5cb02991c container test-container: STEP: delete the pod Jan 4 15:20:52.919: INFO: Waiting for pod client-containers-14513490-6fa1-4740-b25a-17e5cb02991c to disappear Jan 4 15:20:52.926: INFO: Pod client-containers-14513490-6fa1-4740-b25a-17e5cb02991c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:20:52.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3748" for this suite. • [SLOW TEST:8.333 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3497,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:20:52.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-e5c8b6f8-a551-4853-8e85-487a7a6dcc30 in namespace container-probe-3973 Jan 4 15:21:01.596: INFO: Started pod busybox-e5c8b6f8-a551-4853-8e85-487a7a6dcc30 in namespace container-probe-3973 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 15:21:01.598: INFO: Initial restart count of pod busybox-e5c8b6f8-a551-4853-8e85-487a7a6dcc30 is 0 Jan 4 15:21:49.776: INFO: Restart count of pod container-probe-3973/busybox-e5c8b6f8-a551-4853-8e85-487a7a6dcc30 is now 1 (48.177499735s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:21:49.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3973" for this suite. 
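The restartCount 0 -> 1 transition logged above (about 48s after start) is the exec probe doing its job. Below is a sketch of the usual shape of such a pod, assuming the v0.17-era k8s.io/api types that match this run (Probe embeds Handler there; later releases renamed it ProbeHandler) and an illustrative busybox command: the container creates /tmp/health, deletes it after a while, and the failing "cat /tmp/health" probe makes the kubelet restart the container.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy for ~10s, then the probe's target file disappears.
				Command: []string{"sh", "-c",
					"touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"cat", "/tmp/health"},
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}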
• [SLOW TEST:56.981 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3501,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:21:49.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:21:50.103: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-963 I0104 15:21:50.119954 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-963, replica count: 1 I0104 15:21:51.170343 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:52.170591 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:53.170849 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:54.171110 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:55.171374 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:56.171682 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:57.172022 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:58.172378 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:21:59.172650 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0104 15:22:00.172919 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 4 15:22:00.299: INFO: Created: latency-svc-cxlxm Jan 4 15:22:00.305: INFO: Got endpoints: latency-svc-cxlxm [32.218529ms] Jan 4 15:22:00.343: INFO: Created: latency-svc-x47vw Jan 4 15:22:00.344: INFO: Got endpoints: 
latency-svc-x47vw [39.266642ms] Jan 4 15:22:00.371: INFO: Created: latency-svc-5llbx Jan 4 15:22:00.375: INFO: Got endpoints: latency-svc-5llbx [69.858539ms] Jan 4 15:22:00.458: INFO: Created: latency-svc-p22t6 Jan 4 15:22:00.463: INFO: Got endpoints: latency-svc-p22t6 [157.263858ms] Jan 4 15:22:00.613: INFO: Created: latency-svc-zmvs8 Jan 4 15:22:00.636: INFO: Got endpoints: latency-svc-zmvs8 [330.41974ms] Jan 4 15:22:00.638: INFO: Created: latency-svc-pxvbq Jan 4 15:22:00.639: INFO: Got endpoints: latency-svc-pxvbq [333.782712ms] Jan 4 15:22:00.662: INFO: Created: latency-svc-jlr65 Jan 4 15:22:00.670: INFO: Got endpoints: latency-svc-jlr65 [365.032222ms] Jan 4 15:22:00.686: INFO: Created: latency-svc-7qnw6 Jan 4 15:22:00.692: INFO: Got endpoints: latency-svc-7qnw6 [387.002555ms] Jan 4 15:22:00.709: INFO: Created: latency-svc-x9bsf Jan 4 15:22:00.743: INFO: Got endpoints: latency-svc-x9bsf [437.648543ms] Jan 4 15:22:00.754: INFO: Created: latency-svc-gt79t Jan 4 15:22:00.781: INFO: Created: latency-svc-wpbmc Jan 4 15:22:00.782: INFO: Got endpoints: latency-svc-gt79t [476.512193ms] Jan 4 15:22:00.787: INFO: Got endpoints: latency-svc-wpbmc [482.073191ms] Jan 4 15:22:00.802: INFO: Created: latency-svc-m86p2 Jan 4 15:22:00.808: INFO: Got endpoints: latency-svc-m86p2 [502.822262ms] Jan 4 15:22:00.902: INFO: Created: latency-svc-4xvb9 Jan 4 15:22:00.909: INFO: Got endpoints: latency-svc-4xvb9 [603.600939ms] Jan 4 15:22:00.956: INFO: Created: latency-svc-g5kk7 Jan 4 15:22:00.958: INFO: Got endpoints: latency-svc-g5kk7 [652.816942ms] Jan 4 15:22:00.985: INFO: Created: latency-svc-h456f Jan 4 15:22:00.986: INFO: Got endpoints: latency-svc-h456f [680.333613ms] Jan 4 15:22:01.029: INFO: Created: latency-svc-kczdt Jan 4 15:22:01.030: INFO: Got endpoints: latency-svc-kczdt [724.448006ms] Jan 4 15:22:01.102: INFO: Created: latency-svc-jvq7t Jan 4 15:22:01.178: INFO: Got endpoints: latency-svc-jvq7t [833.251126ms] Jan 4 15:22:01.186: INFO: Created: latency-svc-hmqrt Jan 4 15:22:01.198: INFO: Got endpoints: latency-svc-hmqrt [823.318055ms] Jan 4 15:22:01.257: INFO: Created: latency-svc-x75z9 Jan 4 15:22:01.261: INFO: Got endpoints: latency-svc-x75z9 [798.776036ms] Jan 4 15:22:01.432: INFO: Created: latency-svc-vzs6n Jan 4 15:22:01.437: INFO: Got endpoints: latency-svc-vzs6n [801.202889ms] Jan 4 15:22:01.460: INFO: Created: latency-svc-mgq2c Jan 4 15:22:01.489: INFO: Got endpoints: latency-svc-mgq2c [850.076993ms] Jan 4 15:22:01.514: INFO: Created: latency-svc-49txr Jan 4 15:22:01.522: INFO: Got endpoints: latency-svc-49txr [851.207942ms] Jan 4 15:22:01.591: INFO: Created: latency-svc-gv554 Jan 4 15:22:01.593: INFO: Got endpoints: latency-svc-gv554 [900.319259ms] Jan 4 15:22:01.631: INFO: Created: latency-svc-27xsn Jan 4 15:22:02.779: INFO: Got endpoints: latency-svc-27xsn [2.036261465s] Jan 4 15:22:02.853: INFO: Created: latency-svc-k4mxp Jan 4 15:22:02.862: INFO: Got endpoints: latency-svc-k4mxp [2.080289678s] Jan 4 15:22:02.974: INFO: Created: latency-svc-nrxln Jan 4 15:22:03.054: INFO: Got endpoints: latency-svc-nrxln [2.267281868s] Jan 4 15:22:03.400: INFO: Created: latency-svc-bq8qx Jan 4 15:22:03.416: INFO: Got endpoints: latency-svc-bq8qx [2.607933307s] Jan 4 15:22:03.467: INFO: Created: latency-svc-xmhld Jan 4 15:22:03.483: INFO: Got endpoints: latency-svc-xmhld [2.573436235s] Jan 4 15:22:03.581: INFO: Created: latency-svc-v5cgf Jan 4 15:22:03.598: INFO: Got endpoints: latency-svc-v5cgf [2.639633767s] Jan 4 15:22:03.654: INFO: Created: latency-svc-9n9qp Jan 4 15:22:03.659: INFO: Got endpoints: 
latency-svc-9n9qp [2.672841167s] Jan 4 15:22:03.731: INFO: Created: latency-svc-tkprt Jan 4 15:22:03.735: INFO: Got endpoints: latency-svc-tkprt [2.704806055s] Jan 4 15:22:03.821: INFO: Created: latency-svc-7wvpk Jan 4 15:22:03.905: INFO: Created: latency-svc-grqrv Jan 4 15:22:03.905: INFO: Got endpoints: latency-svc-grqrv [2.707011577s] Jan 4 15:22:03.906: INFO: Got endpoints: latency-svc-7wvpk [2.727823409s] Jan 4 15:22:04.104: INFO: Created: latency-svc-tb8qb Jan 4 15:22:04.110: INFO: Got endpoints: latency-svc-tb8qb [2.84876551s] Jan 4 15:22:04.202: INFO: Created: latency-svc-brgfv Jan 4 15:22:04.249: INFO: Got endpoints: latency-svc-brgfv [2.812343284s] Jan 4 15:22:04.319: INFO: Created: latency-svc-g5pkg Jan 4 15:22:04.332: INFO: Got endpoints: latency-svc-g5pkg [2.842336849s] Jan 4 15:22:04.402: INFO: Created: latency-svc-89pv2 Jan 4 15:22:04.427: INFO: Got endpoints: latency-svc-89pv2 [2.905640923s] Jan 4 15:22:04.428: INFO: Created: latency-svc-m4qmd Jan 4 15:22:04.452: INFO: Got endpoints: latency-svc-m4qmd [2.859476395s] Jan 4 15:22:04.490: INFO: Created: latency-svc-8x24s Jan 4 15:22:04.630: INFO: Got endpoints: latency-svc-8x24s [1.85058273s] Jan 4 15:22:04.634: INFO: Created: latency-svc-dltkx Jan 4 15:22:04.653: INFO: Got endpoints: latency-svc-dltkx [1.790749227s] Jan 4 15:22:04.820: INFO: Created: latency-svc-jpplv Jan 4 15:22:04.822: INFO: Got endpoints: latency-svc-jpplv [1.766832803s] Jan 4 15:22:04.873: INFO: Created: latency-svc-vztvc Jan 4 15:22:04.912: INFO: Got endpoints: latency-svc-vztvc [1.496122813s] Jan 4 15:22:04.992: INFO: Created: latency-svc-99dxz Jan 4 15:22:05.013: INFO: Got endpoints: latency-svc-99dxz [1.530220363s] Jan 4 15:22:05.064: INFO: Created: latency-svc-95bxd Jan 4 15:22:05.181: INFO: Got endpoints: latency-svc-95bxd [1.582702927s] Jan 4 15:22:05.192: INFO: Created: latency-svc-msccn Jan 4 15:22:05.198: INFO: Got endpoints: latency-svc-msccn [1.539617397s] Jan 4 15:22:05.254: INFO: Created: latency-svc-l8f4n Jan 4 15:22:05.256: INFO: Got endpoints: latency-svc-l8f4n [1.52080099s] Jan 4 15:22:05.523: INFO: Created: latency-svc-pwfsl Jan 4 15:22:05.552: INFO: Got endpoints: latency-svc-pwfsl [1.646674069s] Jan 4 15:22:05.607: INFO: Created: latency-svc-grcgf Jan 4 15:22:05.613: INFO: Got endpoints: latency-svc-grcgf [1.707750858s] Jan 4 15:22:05.781: INFO: Created: latency-svc-d4dgr Jan 4 15:22:05.811: INFO: Got endpoints: latency-svc-d4dgr [1.700997081s] Jan 4 15:22:05.853: INFO: Created: latency-svc-jxjpt Jan 4 15:22:05.856: INFO: Got endpoints: latency-svc-jxjpt [1.606666957s] Jan 4 15:22:05.942: INFO: Created: latency-svc-cd7rz Jan 4 15:22:05.965: INFO: Got endpoints: latency-svc-cd7rz [1.632969846s] Jan 4 15:22:06.001: INFO: Created: latency-svc-k7k4p Jan 4 15:22:06.019: INFO: Got endpoints: latency-svc-k7k4p [1.591538047s] Jan 4 15:22:06.173: INFO: Created: latency-svc-2d8tw Jan 4 15:22:06.344: INFO: Created: latency-svc-ql469 Jan 4 15:22:06.344: INFO: Got endpoints: latency-svc-2d8tw [1.891659796s] Jan 4 15:22:06.367: INFO: Got endpoints: latency-svc-ql469 [1.737053267s] Jan 4 15:22:06.392: INFO: Created: latency-svc-vfhwb Jan 4 15:22:06.397: INFO: Got endpoints: latency-svc-vfhwb [1.744006918s] Jan 4 15:22:06.418: INFO: Created: latency-svc-252sx Jan 4 15:22:06.426: INFO: Got endpoints: latency-svc-252sx [1.604150841s] Jan 4 15:22:06.478: INFO: Created: latency-svc-zpjpk Jan 4 15:22:06.500: INFO: Got endpoints: latency-svc-zpjpk [1.587693151s] Jan 4 15:22:06.505: INFO: Created: latency-svc-g9v7v Jan 4 15:22:06.520: INFO: Got endpoints: 
latency-svc-g9v7v [1.50709922s] Jan 4 15:22:06.538: INFO: Created: latency-svc-gsg99 Jan 4 15:22:06.563: INFO: Got endpoints: latency-svc-gsg99 [1.382253681s] Jan 4 15:22:06.565: INFO: Created: latency-svc-44pqw Jan 4 15:22:06.574: INFO: Got endpoints: latency-svc-44pqw [1.375063171s] Jan 4 15:22:06.692: INFO: Created: latency-svc-4vxlm Jan 4 15:22:06.696: INFO: Got endpoints: latency-svc-4vxlm [1.439978283s] Jan 4 15:22:06.710: INFO: Created: latency-svc-rztmd Jan 4 15:22:06.717: INFO: Got endpoints: latency-svc-rztmd [1.164473247s] Jan 4 15:22:06.765: INFO: Created: latency-svc-tw6fl Jan 4 15:22:06.835: INFO: Got endpoints: latency-svc-tw6fl [1.221128725s] Jan 4 15:22:06.835: INFO: Created: latency-svc-86jdf Jan 4 15:22:06.881: INFO: Got endpoints: latency-svc-86jdf [1.069304911s] Jan 4 15:22:06.945: INFO: Created: latency-svc-r4jf9 Jan 4 15:22:06.946: INFO: Got endpoints: latency-svc-r4jf9 [1.089095253s] Jan 4 15:22:07.030: INFO: Created: latency-svc-npcg8 Jan 4 15:22:07.031: INFO: Got endpoints: latency-svc-npcg8 [1.066158406s] Jan 4 15:22:07.058: INFO: Created: latency-svc-rq6kf Jan 4 15:22:07.058: INFO: Got endpoints: latency-svc-rq6kf [1.039179756s] Jan 4 15:22:07.164: INFO: Created: latency-svc-5cj8p Jan 4 15:22:07.200: INFO: Got endpoints: latency-svc-5cj8p [855.366411ms] Jan 4 15:22:07.206: INFO: Created: latency-svc-gdvdh Jan 4 15:22:07.210: INFO: Got endpoints: latency-svc-gdvdh [842.353179ms] Jan 4 15:22:07.256: INFO: Created: latency-svc-d5t99 Jan 4 15:22:07.376: INFO: Got endpoints: latency-svc-d5t99 [979.044921ms] Jan 4 15:22:07.383: INFO: Created: latency-svc-rc89v Jan 4 15:22:07.403: INFO: Got endpoints: latency-svc-rc89v [977.470773ms] Jan 4 15:22:07.423: INFO: Created: latency-svc-l7r9l Jan 4 15:22:07.432: INFO: Got endpoints: latency-svc-l7r9l [931.793164ms] Jan 4 15:22:07.455: INFO: Created: latency-svc-m5fv2 Jan 4 15:22:07.461: INFO: Got endpoints: latency-svc-m5fv2 [940.793374ms] Jan 4 15:22:07.514: INFO: Created: latency-svc-zghgq Jan 4 15:22:07.522: INFO: Got endpoints: latency-svc-zghgq [958.370492ms] Jan 4 15:22:07.739: INFO: Created: latency-svc-krtn6 Jan 4 15:22:07.761: INFO: Got endpoints: latency-svc-krtn6 [1.187391825s] Jan 4 15:22:07.766: INFO: Created: latency-svc-8r4ln Jan 4 15:22:07.772: INFO: Got endpoints: latency-svc-8r4ln [1.076132198s] Jan 4 15:22:07.799: INFO: Created: latency-svc-sjbcn Jan 4 15:22:07.826: INFO: Got endpoints: latency-svc-sjbcn [1.108852356s] Jan 4 15:22:07.879: INFO: Created: latency-svc-m4c78 Jan 4 15:22:07.889: INFO: Got endpoints: latency-svc-m4c78 [1.053699561s] Jan 4 15:22:07.916: INFO: Created: latency-svc-mws4d Jan 4 15:22:07.929: INFO: Got endpoints: latency-svc-mws4d [1.047598708s] Jan 4 15:22:07.954: INFO: Created: latency-svc-9dznm Jan 4 15:22:07.954: INFO: Got endpoints: latency-svc-9dznm [1.008315688s] Jan 4 15:22:08.021: INFO: Created: latency-svc-tr9sm Jan 4 15:22:08.022: INFO: Got endpoints: latency-svc-tr9sm [991.36718ms] Jan 4 15:22:08.100: INFO: Created: latency-svc-s9m7x Jan 4 15:22:08.100: INFO: Got endpoints: latency-svc-s9m7x [1.041912601s] Jan 4 15:22:08.118: INFO: Created: latency-svc-dhv8l Jan 4 15:22:08.175: INFO: Got endpoints: latency-svc-dhv8l [974.566544ms] Jan 4 15:22:08.210: INFO: Created: latency-svc-ndsvp Jan 4 15:22:08.223: INFO: Got endpoints: latency-svc-ndsvp [1.012773681s] Jan 4 15:22:08.245: INFO: Created: latency-svc-525f8 Jan 4 15:22:08.355: INFO: Got endpoints: latency-svc-525f8 [978.281953ms] Jan 4 15:22:08.372: INFO: Created: latency-svc-4sdhg Jan 4 15:22:08.389: INFO: Got endpoints: 
latency-svc-4sdhg [985.498668ms] Jan 4 15:22:08.540: INFO: Created: latency-svc-cs8dh Jan 4 15:22:08.540: INFO: Got endpoints: latency-svc-cs8dh [1.107885075s] Jan 4 15:22:08.893: INFO: Created: latency-svc-6kldx Jan 4 15:22:08.940: INFO: Got endpoints: latency-svc-6kldx [1.478650225s] Jan 4 15:22:08.974: INFO: Created: latency-svc-4t49h Jan 4 15:22:08.983: INFO: Got endpoints: latency-svc-4t49h [1.461225043s] Jan 4 15:22:09.049: INFO: Created: latency-svc-mb5rc Jan 4 15:22:09.056: INFO: Got endpoints: latency-svc-mb5rc [1.294590657s] Jan 4 15:22:09.079: INFO: Created: latency-svc-wndgb Jan 4 15:22:09.086: INFO: Got endpoints: latency-svc-wndgb [1.31381941s] Jan 4 15:22:09.124: INFO: Created: latency-svc-gt55r Jan 4 15:22:09.140: INFO: Got endpoints: latency-svc-gt55r [1.313849568s] Jan 4 15:22:09.202: INFO: Created: latency-svc-ckj9z Jan 4 15:22:09.206: INFO: Got endpoints: latency-svc-ckj9z [1.317154179s] Jan 4 15:22:09.234: INFO: Created: latency-svc-7n4mg Jan 4 15:22:09.241: INFO: Got endpoints: latency-svc-7n4mg [1.312557528s] Jan 4 15:22:09.456: INFO: Created: latency-svc-ghl79 Jan 4 15:22:09.505: INFO: Created: latency-svc-wcc96 Jan 4 15:22:09.505: INFO: Got endpoints: latency-svc-ghl79 [1.550673429s] Jan 4 15:22:09.641: INFO: Got endpoints: latency-svc-wcc96 [1.618492812s] Jan 4 15:22:09.679: INFO: Created: latency-svc-28jtg Jan 4 15:22:09.688: INFO: Got endpoints: latency-svc-28jtg [1.587998576s] Jan 4 15:22:09.742: INFO: Created: latency-svc-fgf6q Jan 4 15:22:09.836: INFO: Got endpoints: latency-svc-fgf6q [1.661067814s] Jan 4 15:22:09.893: INFO: Created: latency-svc-cgj29 Jan 4 15:22:09.931: INFO: Got endpoints: latency-svc-cgj29 [1.708558442s] Jan 4 15:22:10.073: INFO: Created: latency-svc-krsnl Jan 4 15:22:10.118: INFO: Got endpoints: latency-svc-krsnl [1.762602076s] Jan 4 15:22:10.418: INFO: Created: latency-svc-sw92x Jan 4 15:22:10.678: INFO: Got endpoints: latency-svc-sw92x [2.288685459s] Jan 4 15:22:10.894: INFO: Created: latency-svc-r5lmm Jan 4 15:22:10.916: INFO: Got endpoints: latency-svc-r5lmm [2.375867079s] Jan 4 15:22:10.982: INFO: Created: latency-svc-kv4ks Jan 4 15:22:11.173: INFO: Got endpoints: latency-svc-kv4ks [2.231990676s] Jan 4 15:22:11.224: INFO: Created: latency-svc-z4rx7 Jan 4 15:22:11.232: INFO: Got endpoints: latency-svc-z4rx7 [2.249037957s] Jan 4 15:22:11.366: INFO: Created: latency-svc-jts2t Jan 4 15:22:11.374: INFO: Got endpoints: latency-svc-jts2t [2.318446648s] Jan 4 15:22:11.446: INFO: Created: latency-svc-xm8vs Jan 4 15:22:11.674: INFO: Got endpoints: latency-svc-xm8vs [2.587650006s] Jan 4 15:22:11.713: INFO: Created: latency-svc-klts9 Jan 4 15:22:11.734: INFO: Got endpoints: latency-svc-klts9 [2.59421616s] Jan 4 15:22:11.910: INFO: Created: latency-svc-fbfxd Jan 4 15:22:11.927: INFO: Got endpoints: latency-svc-fbfxd [2.720776474s] Jan 4 15:22:11.967: INFO: Created: latency-svc-2xdsf Jan 4 15:22:11.971: INFO: Got endpoints: latency-svc-2xdsf [2.729746261s] Jan 4 15:22:12.064: INFO: Created: latency-svc-2tg2r Jan 4 15:22:12.079: INFO: Got endpoints: latency-svc-2tg2r [2.574747069s] Jan 4 15:22:12.120: INFO: Created: latency-svc-48gjm Jan 4 15:22:12.144: INFO: Got endpoints: latency-svc-48gjm [2.50239181s] Jan 4 15:22:12.267: INFO: Created: latency-svc-m9bg6 Jan 4 15:22:12.339: INFO: Got endpoints: latency-svc-m9bg6 [2.650697126s] Jan 4 15:22:12.340: INFO: Created: latency-svc-75nlx Jan 4 15:22:12.453: INFO: Got endpoints: latency-svc-75nlx [2.616857698s] Jan 4 15:22:12.478: INFO: Created: latency-svc-vcv2w Jan 4 15:22:12.487: INFO: Got endpoints: 
latency-svc-vcv2w [2.555521319s] Jan 4 15:22:12.704: INFO: Created: latency-svc-gj6f6 Jan 4 15:22:12.704: INFO: Got endpoints: latency-svc-gj6f6 [2.586319945s] Jan 4 15:22:12.764: INFO: Created: latency-svc-j5c4b Jan 4 15:22:12.768: INFO: Got endpoints: latency-svc-j5c4b [2.089731767s] Jan 4 15:22:12.936: INFO: Created: latency-svc-v8ffd Jan 4 15:22:12.996: INFO: Got endpoints: latency-svc-v8ffd [2.079843795s] Jan 4 15:22:12.997: INFO: Created: latency-svc-dl7zk Jan 4 15:22:13.159: INFO: Got endpoints: latency-svc-dl7zk [391.19178ms] Jan 4 15:22:13.221: INFO: Created: latency-svc-5m68g Jan 4 15:22:13.245: INFO: Got endpoints: latency-svc-5m68g [2.072254918s] Jan 4 15:22:13.583: INFO: Created: latency-svc-bd8g7 Jan 4 15:22:13.599: INFO: Got endpoints: latency-svc-bd8g7 [2.367123356s] Jan 4 15:22:13.606: INFO: Created: latency-svc-qlbhp Jan 4 15:22:13.769: INFO: Created: latency-svc-4zfbx Jan 4 15:22:13.773: INFO: Got endpoints: latency-svc-qlbhp [2.398172195s] Jan 4 15:22:13.799: INFO: Got endpoints: latency-svc-4zfbx [2.124862351s] Jan 4 15:22:13.848: INFO: Created: latency-svc-4nwtp Jan 4 15:22:13.988: INFO: Got endpoints: latency-svc-4nwtp [2.253332441s] Jan 4 15:22:14.006: INFO: Created: latency-svc-9x6mg Jan 4 15:22:14.040: INFO: Got endpoints: latency-svc-9x6mg [2.112633721s] Jan 4 15:22:14.044: INFO: Created: latency-svc-tvjjv Jan 4 15:22:14.133: INFO: Got endpoints: latency-svc-tvjjv [2.162170832s] Jan 4 15:22:14.150: INFO: Created: latency-svc-q98x7 Jan 4 15:22:14.163: INFO: Got endpoints: latency-svc-q98x7 [2.082901792s] Jan 4 15:22:14.188: INFO: Created: latency-svc-ctdl9 Jan 4 15:22:14.192: INFO: Got endpoints: latency-svc-ctdl9 [2.048580294s] Jan 4 15:22:14.212: INFO: Created: latency-svc-brjhj Jan 4 15:22:14.416: INFO: Got endpoints: latency-svc-brjhj [2.076908504s] Jan 4 15:22:14.428: INFO: Created: latency-svc-h6x4h Jan 4 15:22:14.428: INFO: Got endpoints: latency-svc-h6x4h [1.974432366s] Jan 4 15:22:14.454: INFO: Created: latency-svc-p6qhh Jan 4 15:22:14.458: INFO: Got endpoints: latency-svc-p6qhh [1.970364097s] Jan 4 15:22:14.488: INFO: Created: latency-svc-9t8px Jan 4 15:22:14.506: INFO: Got endpoints: latency-svc-9t8px [1.801959673s] Jan 4 15:22:14.611: INFO: Created: latency-svc-mvqcd Jan 4 15:22:14.630: INFO: Got endpoints: latency-svc-mvqcd [1.633363047s] Jan 4 15:22:14.631: INFO: Created: latency-svc-d24h6 Jan 4 15:22:14.631: INFO: Got endpoints: latency-svc-d24h6 [1.471852495s] Jan 4 15:22:14.660: INFO: Created: latency-svc-2s6qj Jan 4 15:22:14.673: INFO: Got endpoints: latency-svc-2s6qj [1.428129081s] Jan 4 15:22:14.781: INFO: Created: latency-svc-c56hd Jan 4 15:22:14.820: INFO: Created: latency-svc-8bs2t Jan 4 15:22:14.820: INFO: Got endpoints: latency-svc-c56hd [1.220438538s] Jan 4 15:22:14.853: INFO: Got endpoints: latency-svc-8bs2t [1.080073229s] Jan 4 15:22:14.853: INFO: Created: latency-svc-gnsmx Jan 4 15:22:15.014: INFO: Created: latency-svc-wjmlj Jan 4 15:22:15.015: INFO: Got endpoints: latency-svc-gnsmx [1.215210424s] Jan 4 15:22:15.035: INFO: Got endpoints: latency-svc-wjmlj [1.04707285s] Jan 4 15:22:15.110: INFO: Created: latency-svc-fkvkv Jan 4 15:22:15.163: INFO: Got endpoints: latency-svc-fkvkv [1.122714026s] Jan 4 15:22:15.171: INFO: Created: latency-svc-c2pvb Jan 4 15:22:15.196: INFO: Got endpoints: latency-svc-c2pvb [1.061811349s] Jan 4 15:22:15.251: INFO: Created: latency-svc-bsrdl Jan 4 15:22:15.253: INFO: Got endpoints: latency-svc-bsrdl [1.089988089s] Jan 4 15:22:15.388: INFO: Created: latency-svc-znnjf Jan 4 15:22:15.389: INFO: Got endpoints: 
latency-svc-znnjf [1.196523986s] Jan 4 15:22:15.456: INFO: Created: latency-svc-mk7l4 Jan 4 15:22:15.475: INFO: Got endpoints: latency-svc-mk7l4 [1.058853343s] Jan 4 15:22:15.598: INFO: Created: latency-svc-mfnst Jan 4 15:22:15.606: INFO: Got endpoints: latency-svc-mfnst [1.177794958s] Jan 4 15:22:15.667: INFO: Created: latency-svc-p59gz Jan 4 15:22:15.692: INFO: Created: latency-svc-bkxw5 Jan 4 15:22:15.692: INFO: Got endpoints: latency-svc-p59gz [1.234445692s] Jan 4 15:22:15.847: INFO: Got endpoints: latency-svc-bkxw5 [1.340391014s] Jan 4 15:22:15.853: INFO: Created: latency-svc-p9lgm Jan 4 15:22:15.867: INFO: Got endpoints: latency-svc-p9lgm [1.235904647s] Jan 4 15:22:15.885: INFO: Created: latency-svc-z6ljp Jan 4 15:22:15.890: INFO: Got endpoints: latency-svc-z6ljp [1.259702599s] Jan 4 15:22:16.099: INFO: Created: latency-svc-m7btc Jan 4 15:22:16.105: INFO: Got endpoints: latency-svc-m7btc [1.431131516s] Jan 4 15:22:16.193: INFO: Created: latency-svc-st2dx Jan 4 15:22:16.384: INFO: Got endpoints: latency-svc-st2dx [1.564175179s] Jan 4 15:22:16.386: INFO: Created: latency-svc-dhnsx Jan 4 15:22:16.408: INFO: Got endpoints: latency-svc-dhnsx [1.555233276s] Jan 4 15:22:16.436: INFO: Created: latency-svc-4vc8k Jan 4 15:22:16.454: INFO: Got endpoints: latency-svc-4vc8k [1.439475934s] Jan 4 15:22:16.487: INFO: Created: latency-svc-zts2r Jan 4 15:22:16.619: INFO: Created: latency-svc-9hp8j Jan 4 15:22:16.624: INFO: Got endpoints: latency-svc-zts2r [1.588590306s] Jan 4 15:22:16.636: INFO: Got endpoints: latency-svc-9hp8j [1.472488009s] Jan 4 15:22:16.670: INFO: Created: latency-svc-pll28 Jan 4 15:22:16.682: INFO: Got endpoints: latency-svc-pll28 [1.48680396s] Jan 4 15:22:16.933: INFO: Created: latency-svc-fpd2x Jan 4 15:22:16.935: INFO: Got endpoints: latency-svc-fpd2x [1.682051331s] Jan 4 15:22:17.291: INFO: Created: latency-svc-b6ll7 Jan 4 15:22:17.294: INFO: Got endpoints: latency-svc-b6ll7 [1.905014802s] Jan 4 15:22:17.373: INFO: Created: latency-svc-dvrfr Jan 4 15:22:17.385: INFO: Got endpoints: latency-svc-dvrfr [1.909711461s] Jan 4 15:22:17.734: INFO: Created: latency-svc-v5jtj Jan 4 15:22:17.740: INFO: Got endpoints: latency-svc-v5jtj [2.133693895s] Jan 4 15:22:17.802: INFO: Created: latency-svc-6gtpr Jan 4 15:22:17.963: INFO: Got endpoints: latency-svc-6gtpr [2.271165965s] Jan 4 15:22:17.986: INFO: Created: latency-svc-sctw9 Jan 4 15:22:17.991: INFO: Got endpoints: latency-svc-sctw9 [2.144124147s] Jan 4 15:22:18.145: INFO: Created: latency-svc-8t647 Jan 4 15:22:18.148: INFO: Got endpoints: latency-svc-8t647 [2.28059425s] Jan 4 15:22:18.216: INFO: Created: latency-svc-hzqf2 Jan 4 15:22:18.227: INFO: Got endpoints: latency-svc-hzqf2 [2.337131438s] Jan 4 15:22:18.380: INFO: Created: latency-svc-2psjk Jan 4 15:22:18.383: INFO: Got endpoints: latency-svc-2psjk [2.278705605s] Jan 4 15:22:18.414: INFO: Created: latency-svc-cf765 Jan 4 15:22:18.437: INFO: Got endpoints: latency-svc-cf765 [2.052245304s] Jan 4 15:22:18.439: INFO: Created: latency-svc-mqvm6 Jan 4 15:22:18.447: INFO: Got endpoints: latency-svc-mqvm6 [2.03778708s] Jan 4 15:22:18.562: INFO: Created: latency-svc-mmnlg Jan 4 15:22:18.571: INFO: Got endpoints: latency-svc-mmnlg [2.116421841s] Jan 4 15:22:18.599: INFO: Created: latency-svc-582t7 Jan 4 15:22:18.612: INFO: Got endpoints: latency-svc-582t7 [1.987474665s] Jan 4 15:22:18.899: INFO: Created: latency-svc-xdfwz Jan 4 15:22:18.933: INFO: Got endpoints: latency-svc-xdfwz [2.297436147s] Jan 4 15:22:18.941: INFO: Created: latency-svc-4l48d Jan 4 15:22:18.955: INFO: Got endpoints: 
latency-svc-4l48d [2.272887135s] Jan 4 15:22:18.959: INFO: Created: latency-svc-bjzmf Jan 4 15:22:18.961: INFO: Got endpoints: latency-svc-bjzmf [2.025799793s] Jan 4 15:22:18.995: INFO: Created: latency-svc-n6jx2 Jan 4 15:22:19.067: INFO: Got endpoints: latency-svc-n6jx2 [1.77355712s] Jan 4 15:22:19.073: INFO: Created: latency-svc-5fzrz Jan 4 15:22:19.087: INFO: Got endpoints: latency-svc-5fzrz [1.701618484s] Jan 4 15:22:19.106: INFO: Created: latency-svc-pggzj Jan 4 15:22:19.108: INFO: Got endpoints: latency-svc-pggzj [1.368265082s] Jan 4 15:22:19.155: INFO: Created: latency-svc-7rmm5 Jan 4 15:22:19.432: INFO: Got endpoints: latency-svc-7rmm5 [1.468351815s] Jan 4 15:22:19.451: INFO: Created: latency-svc-vl8fh Jan 4 15:22:19.484: INFO: Created: latency-svc-k9bqs Jan 4 15:22:19.485: INFO: Got endpoints: latency-svc-vl8fh [1.493981567s] Jan 4 15:22:19.697: INFO: Created: latency-svc-lf2ds Jan 4 15:22:19.697: INFO: Got endpoints: latency-svc-k9bqs [1.549436447s] Jan 4 15:22:19.701: INFO: Got endpoints: latency-svc-lf2ds [1.474224213s] Jan 4 15:22:19.722: INFO: Created: latency-svc-c8ncx Jan 4 15:22:19.726: INFO: Got endpoints: latency-svc-c8ncx [1.34229133s] Jan 4 15:22:19.737: INFO: Created: latency-svc-ncj8d Jan 4 15:22:19.764: INFO: Created: latency-svc-74n9s Jan 4 15:22:19.764: INFO: Got endpoints: latency-svc-ncj8d [1.326908064s] Jan 4 15:22:19.785: INFO: Got endpoints: latency-svc-74n9s [1.338095428s] Jan 4 15:22:19.790: INFO: Created: latency-svc-j6z7f Jan 4 15:22:19.911: INFO: Got endpoints: latency-svc-j6z7f [1.340338459s] Jan 4 15:22:19.966: INFO: Created: latency-svc-r9tjb Jan 4 15:22:19.974: INFO: Created: latency-svc-xdwst Jan 4 15:22:20.130: INFO: Got endpoints: latency-svc-r9tjb [1.517965016s] Jan 4 15:22:20.135: INFO: Created: latency-svc-vx28x Jan 4 15:22:20.152: INFO: Got endpoints: latency-svc-xdwst [1.218924525s] Jan 4 15:22:20.160: INFO: Got endpoints: latency-svc-vx28x [1.204437552s] Jan 4 15:22:20.275: INFO: Created: latency-svc-hlpz4 Jan 4 15:22:20.288: INFO: Got endpoints: latency-svc-hlpz4 [1.327550418s] Jan 4 15:22:20.314: INFO: Created: latency-svc-bmdnj Jan 4 15:22:20.331: INFO: Got endpoints: latency-svc-bmdnj [1.262940847s] Jan 4 15:22:20.352: INFO: Created: latency-svc-m5kq2 Jan 4 15:22:20.370: INFO: Created: latency-svc-4cljf Jan 4 15:22:20.371: INFO: Got endpoints: latency-svc-m5kq2 [1.284006966s] Jan 4 15:22:20.462: INFO: Created: latency-svc-4rhfg Jan 4 15:22:20.465: INFO: Got endpoints: latency-svc-4cljf [1.356776234s] Jan 4 15:22:20.474: INFO: Got endpoints: latency-svc-4rhfg [1.041687961s] Jan 4 15:22:20.503: INFO: Created: latency-svc-zhz7t Jan 4 15:22:20.504: INFO: Got endpoints: latency-svc-zhz7t [1.018368343s] Jan 4 15:22:20.533: INFO: Created: latency-svc-s7fhj Jan 4 15:22:20.559: INFO: Created: latency-svc-br5lj Jan 4 15:22:20.560: INFO: Got endpoints: latency-svc-s7fhj [862.896286ms] Jan 4 15:22:20.665: INFO: Got endpoints: latency-svc-br5lj [963.862939ms] Jan 4 15:22:20.683: INFO: Created: latency-svc-bhwqj Jan 4 15:22:20.688: INFO: Got endpoints: latency-svc-bhwqj [962.322893ms] Jan 4 15:22:20.709: INFO: Created: latency-svc-hppfg Jan 4 15:22:20.711: INFO: Got endpoints: latency-svc-hppfg [946.983879ms] Jan 4 15:22:20.763: INFO: Created: latency-svc-nrfkc Jan 4 15:22:20.840: INFO: Got endpoints: latency-svc-nrfkc [1.055396649s] Jan 4 15:22:20.849: INFO: Created: latency-svc-2pz5b Jan 4 15:22:20.855: INFO: Got endpoints: latency-svc-2pz5b [943.498776ms] Jan 4 15:22:20.889: INFO: Created: latency-svc-rgrgr Jan 4 15:22:21.025: INFO: Got endpoints: 
latency-svc-rgrgr [895.183593ms] Jan 4 15:22:21.028: INFO: Created: latency-svc-858gh Jan 4 15:22:21.074: INFO: Got endpoints: latency-svc-858gh [921.577069ms] Jan 4 15:22:21.081: INFO: Created: latency-svc-s9bls Jan 4 15:22:21.084: INFO: Got endpoints: latency-svc-s9bls [923.964661ms] Jan 4 15:22:21.285: INFO: Created: latency-svc-fxmlk Jan 4 15:22:21.286: INFO: Got endpoints: latency-svc-fxmlk [996.953832ms] Jan 4 15:22:21.286: INFO: Latencies: [39.266642ms 69.858539ms 157.263858ms 330.41974ms 333.782712ms 365.032222ms 387.002555ms 391.19178ms 437.648543ms 476.512193ms 482.073191ms 502.822262ms 603.600939ms 652.816942ms 680.333613ms 724.448006ms 798.776036ms 801.202889ms 823.318055ms 833.251126ms 842.353179ms 850.076993ms 851.207942ms 855.366411ms 862.896286ms 895.183593ms 900.319259ms 921.577069ms 923.964661ms 931.793164ms 940.793374ms 943.498776ms 946.983879ms 958.370492ms 962.322893ms 963.862939ms 974.566544ms 977.470773ms 978.281953ms 979.044921ms 985.498668ms 991.36718ms 996.953832ms 1.008315688s 1.012773681s 1.018368343s 1.039179756s 1.041687961s 1.041912601s 1.04707285s 1.047598708s 1.053699561s 1.055396649s 1.058853343s 1.061811349s 1.066158406s 1.069304911s 1.076132198s 1.080073229s 1.089095253s 1.089988089s 1.107885075s 1.108852356s 1.122714026s 1.164473247s 1.177794958s 1.187391825s 1.196523986s 1.204437552s 1.215210424s 1.218924525s 1.220438538s 1.221128725s 1.234445692s 1.235904647s 1.259702599s 1.262940847s 1.284006966s 1.294590657s 1.312557528s 1.31381941s 1.313849568s 1.317154179s 1.326908064s 1.327550418s 1.338095428s 1.340338459s 1.340391014s 1.34229133s 1.356776234s 1.368265082s 1.375063171s 1.382253681s 1.428129081s 1.431131516s 1.439475934s 1.439978283s 1.461225043s 1.468351815s 1.471852495s 1.472488009s 1.474224213s 1.478650225s 1.48680396s 1.493981567s 1.496122813s 1.50709922s 1.517965016s 1.52080099s 1.530220363s 1.539617397s 1.549436447s 1.550673429s 1.555233276s 1.564175179s 1.582702927s 1.587693151s 1.587998576s 1.588590306s 1.591538047s 1.604150841s 1.606666957s 1.618492812s 1.632969846s 1.633363047s 1.646674069s 1.661067814s 1.682051331s 1.700997081s 1.701618484s 1.707750858s 1.708558442s 1.737053267s 1.744006918s 1.762602076s 1.766832803s 1.77355712s 1.790749227s 1.801959673s 1.85058273s 1.891659796s 1.905014802s 1.909711461s 1.970364097s 1.974432366s 1.987474665s 2.025799793s 2.036261465s 2.03778708s 2.048580294s 2.052245304s 2.072254918s 2.076908504s 2.079843795s 2.080289678s 2.082901792s 2.089731767s 2.112633721s 2.116421841s 2.124862351s 2.133693895s 2.144124147s 2.162170832s 2.231990676s 2.249037957s 2.253332441s 2.267281868s 2.271165965s 2.272887135s 2.278705605s 2.28059425s 2.288685459s 2.297436147s 2.318446648s 2.337131438s 2.367123356s 2.375867079s 2.398172195s 2.50239181s 2.555521319s 2.573436235s 2.574747069s 2.586319945s 2.587650006s 2.59421616s 2.607933307s 2.616857698s 2.639633767s 2.650697126s 2.672841167s 2.704806055s 2.707011577s 2.720776474s 2.727823409s 2.729746261s 2.812343284s 2.842336849s 2.84876551s 2.859476395s 2.905640923s] Jan 4 15:22:21.286: INFO: 50 %ile: 1.472488009s Jan 4 15:22:21.286: INFO: 90 %ile: 2.573436235s Jan 4 15:22:21.286: INFO: 99 %ile: 2.859476395s Jan 4 15:22:21.286: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:22:21.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-963" for this suite. 
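The 50/90/99 %ile figures above are plain order statistics over the 200 collected endpoint latencies. Below is a minimal Go sketch of that reduction, assuming a nearest-rank percentile convention (the log does not show the suite's exact indexing rule); the sample values are a hand-picked subset of the list above, so the printed numbers are illustrative only.

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the p-th percentile of a sorted slice of durations
// using the nearest-rank method: the smallest sample such that at least
// p percent of all samples are <= it.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	rank := int(math.Ceil(p/100*float64(len(sorted)))) - 1
	if rank < 0 {
		rank = 0
	}
	return sorted[rank]
}

func main() {
	// A handful of samples standing in for the 200 latencies above.
	samples := []time.Duration{
		39266642 * time.Nanosecond,   // 39.266642ms
		391191780 * time.Nanosecond,  // 391.19178ms
		1472488009 * time.Nanosecond, // 1.472488009s
		2573436235 * time.Nanosecond, // 2.573436235s
		2905640923 * time.Nanosecond, // 2.905640923s
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
```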
• [SLOW TEST:31.397 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":224,"skipped":3502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 15:22:21.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 4 15:22:21.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7519" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":225,"skipped":3555,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 4 15:22:21.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 4 15:22:29.879: INFO: 10 pods remaining
Jan 4 15:22:29.879: INFO: 10 pods has nil DeletionTimestamp
Jan 4 15:22:29.879: INFO:
Jan 4 15:22:30.503: INFO: 1 pods remaining
Jan 4 15:22:30.504: INFO: 0 pods has nil DeletionTimestamp
Jan 4 15:22:30.504: INFO:
STEP: Gathering metrics
W0104 15:22:31.507535 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 4 15:22:31.507: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:22:31.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5327" for this suite. • [SLOW TEST:10.329 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":226,"skipped":3556,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:22:31.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 4 15:22:32.901: INFO: Waiting up to 5m0s for pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42" in namespace "downward-api-1037" to be "success or failure" Jan 4 15:22:33.036: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 135.028834ms Jan 4 15:22:36.517: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 3.615269275s Jan 4 15:22:38.565: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 5.663985746s Jan 4 15:22:41.134: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.232312207s Jan 4 15:22:43.161: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25922253s Jan 4 15:22:45.237: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 12.335105745s Jan 4 15:22:47.277: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 14.375833064s Jan 4 15:22:49.438: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Pending", Reason="", readiness=false. Elapsed: 16.53695392s Jan 4 15:22:51.563: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.661848112s STEP: Saw pod success Jan 4 15:22:51.563: INFO: Pod "downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42" satisfied condition "success or failure" Jan 4 15:22:51.596: INFO: Trying to get logs from node jerma-node pod downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42 container dapi-container: STEP: delete the pod Jan 4 15:22:51.779: INFO: Waiting for pod downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42 to disappear Jan 4 15:22:51.823: INFO: Pod downward-api-cfd24307-a151-4439-be71-c58f1f3c4b42 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:22:51.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1037" for this suite. • [SLOW TEST:20.140 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:22:51.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 4 15:23:03.030: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:23:03.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3856" for this suite. • [SLOW TEST:11.336 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3594,"failed":0} [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:23:03.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 4 15:23:03.669: INFO: Waiting up to 5m0s for pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63" in namespace "emptydir-4647" to be "success or failure" Jan 4 15:23:03.702: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Pending", Reason="", readiness=false. Elapsed: 33.456731ms Jan 4 15:23:05.774: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105430464s Jan 4 15:23:07.888: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218750652s Jan 4 15:23:09.961: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.291731889s Jan 4 15:23:12.163: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494295082s Jan 4 15:23:14.290: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620904347s Jan 4 15:23:16.295: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.625979613s STEP: Saw pod success Jan 4 15:23:16.295: INFO: Pod "pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63" satisfied condition "success or failure" Jan 4 15:23:16.297: INFO: Trying to get logs from node jerma-node pod pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63 container test-container: STEP: delete the pod Jan 4 15:23:16.710: INFO: Waiting for pod pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63 to disappear Jan 4 15:23:16.714: INFO: Pod pod-7a08d103-8cb4-41b8-bccc-b48b77d8ca63 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:23:16.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4647" for this suite. • [SLOW TEST:13.401 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3594,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:23:16.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7339 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7339 to expose endpoints map[] Jan 4 15:23:16.898: INFO: Get endpoints failed (14.771297ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 4 15:23:17.903: INFO: successfully validated that service endpoint-test2 in namespace services-7339 exposes endpoints map[] (1.019140142s elapsed) STEP: Creating pod pod1 in namespace services-7339 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7339 to expose endpoints map[pod1:[80]] Jan 4 15:23:22.207: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.297294945s elapsed, will retry) Jan 4 15:23:24.285: INFO: successfully validated that service endpoint-test2 in namespace services-7339 exposes endpoints map[pod1:[80]] (6.375723511s elapsed) STEP: Creating pod pod2 in namespace services-7339 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7339 to expose endpoints map[pod1:[80] pod2:[80]] Jan 4 15:23:28.562: INFO: Unexpected endpoints: found map[40206749-cc08-4acf-96d6-6909ccd99732:[80]], expected map[pod1:[80] pod2:[80]] (4.273669098s elapsed, will retry) Jan 4 15:23:30.579: INFO: successfully validated that service endpoint-test2 in namespace services-7339 exposes endpoints 
map[pod1:[80] pod2:[80]] (6.290787422s elapsed) STEP: Deleting pod pod1 in namespace services-7339 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7339 to expose endpoints map[pod2:[80]] Jan 4 15:23:30.622: INFO: successfully validated that service endpoint-test2 in namespace services-7339 exposes endpoints map[pod2:[80]] (32.690308ms elapsed) STEP: Deleting pod pod2 in namespace services-7339 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7339 to expose endpoints map[] Jan 4 15:23:30.673: INFO: successfully validated that service endpoint-test2 in namespace services-7339 exposes endpoints map[] (9.746214ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:23:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7339" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:13.993 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":230,"skipped":3607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:23:30.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 4 15:23:32.061: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 4 15:23:34.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:23:36.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:23:38.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:23:40.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 15:23:42.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748213, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 4 15:23:45.436: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:23:45.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5984-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:23:46.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7455" for this suite. STEP: Destroying namespace "webhook-7455-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.241 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":231,"skipped":3639,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:23:46.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:23:47.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6" in namespace "downward-api-2307" to be "success or failure" Jan 4 15:23:47.960: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 695.304755ms Jan 4 15:23:49.992: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727153144s Jan 4 15:23:52.024: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.758806555s Jan 4 15:23:54.029: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764319002s Jan 4 15:23:56.034: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769146839s Jan 4 15:23:58.040: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.775007272s Jan 4 15:24:00.047: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.781951177s STEP: Saw pod success Jan 4 15:24:00.047: INFO: Pod "downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6" satisfied condition "success or failure" Jan 4 15:24:00.052: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6 container client-container: STEP: delete the pod Jan 4 15:24:00.098: INFO: Waiting for pod downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6 to disappear Jan 4 15:24:00.169: INFO: Pod downwardapi-volume-db2c95f1-0b20-4363-ab4a-b159cc51a1a6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:24:00.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2307" for this suite. 
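The pod in this test mounts a downwardAPI volume whose file resolves the container's own memory limit. A minimal sketch of the shape of such a pod, built with the k8s.io/api types; the image, args, mount path, and 64Mi limit are illustrative assumptions, not the suite's actual values. With the default divisor ("1"), the kubelet renders the limit into the file in bytes.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container must declare a memory limit for limits.memory to resolve.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"mounttest", "--file_content=/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// Points the file at this container's limits.memory.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
```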
• [SLOW TEST:13.222 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3652,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:24:00.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 4 15:24:00.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e" in namespace "downward-api-3665" to be "success or failure" Jan 4 15:24:00.342: INFO: Pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.899957ms Jan 4 15:24:02.346: INFO: Pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02313695s Jan 4 15:24:04.352: INFO: Pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02970406s Jan 4 15:24:06.507: INFO: Pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184388226s Jan 4 15:24:08.514: INFO: Pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.191263662s STEP: Saw pod success Jan 4 15:24:08.514: INFO: Pod "downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e" satisfied condition "success or failure" Jan 4 15:24:08.519: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e container client-container: STEP: delete the pod Jan 4 15:24:08.680: INFO: Waiting for pod downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e to disappear Jan 4 15:24:08.699: INFO: Pod downwardapi-volume-893097f5-814c-4854-ad17-342f19b9098e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:24:08.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3665" for this suite. 
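The distinguishing feature of this spec is the per-file Mode on a downwardAPI item, which overrides the volume-wide default permission for that one file. A short sketch of the relevant field, assuming mode 0400; the path and fieldRef are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// 0400: owner read-only, the kind of value the "set mode on item
	// file" check verifies on the mounted file.
	mode := int32(0400)

	item := corev1.DownwardAPIVolumeFile{
		Path: "podinfo/podname",
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.name",
		},
		// Mode is a *int32; when set it overrides the volume default
		// for this single file.
		Mode: &mode,
	}
	fmt.Printf("%+v mode=%o\n", item, *item.Mode)
}
```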
• [SLOW TEST:8.526 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3654,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:24:08.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7508 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 4 15:24:08.849: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 4 15:24:46.993: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-7508 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 15:24:46.994: INFO: >>> kubeConfig: /root/.kube/config I0104 15:24:47.032486 9 log.go:172] (0xc002828f20) (0xc000f32fa0) Create stream I0104 15:24:47.032570 9 log.go:172] (0xc002828f20) (0xc000f32fa0) Stream added, broadcasting: 1 I0104 15:24:47.039800 9 log.go:172] (0xc002828f20) Reply frame received for 1 I0104 15:24:47.039850 9 log.go:172] (0xc002828f20) (0xc0017d0000) Create stream I0104 15:24:47.039862 9 log.go:172] (0xc002828f20) (0xc0017d0000) Stream added, broadcasting: 3 I0104 15:24:47.040890 9 log.go:172] (0xc002828f20) Reply frame received for 3 I0104 15:24:47.040910 9 log.go:172] (0xc002828f20) (0xc0013c21e0) Create stream I0104 15:24:47.040918 9 log.go:172] (0xc002828f20) (0xc0013c21e0) Stream added, broadcasting: 5 I0104 15:24:47.042158 9 log.go:172] (0xc002828f20) Reply frame received for 5 I0104 15:24:47.135825 9 log.go:172] (0xc002828f20) Data frame received for 3 I0104 15:24:47.135896 9 log.go:172] (0xc0017d0000) (3) Data frame handling I0104 15:24:47.135924 9 log.go:172] (0xc0017d0000) (3) Data frame sent I0104 15:24:47.202896 9 log.go:172] (0xc002828f20) Data frame received for 1 I0104 15:24:47.202932 9 log.go:172] (0xc000f32fa0) (1) Data frame handling I0104 15:24:47.202972 9 log.go:172] (0xc000f32fa0) (1) Data frame sent I0104 15:24:47.204202 9 log.go:172] (0xc002828f20) (0xc0013c21e0) Stream removed, broadcasting: 5 I0104 15:24:47.204405 9 log.go:172] (0xc002828f20) (0xc0017d0000) Stream removed, broadcasting: 3 I0104 15:24:47.204450 9 log.go:172] (0xc002828f20) (0xc000f32fa0) Stream removed, broadcasting: 1 I0104 15:24:47.204571 9 log.go:172] 
(0xc002828f20) (0xc000f32fa0) Stream removed, broadcasting: 1 I0104 15:24:47.204599 9 log.go:172] (0xc002828f20) (0xc0017d0000) Stream removed, broadcasting: 3 I0104 15:24:47.204631 9 log.go:172] (0xc002828f20) (0xc0013c21e0) Stream removed, broadcasting: 5 Jan 4 15:24:47.204: INFO: Waiting for responses: map[] I0104 15:24:47.205250 9 log.go:172] (0xc002828f20) Go away received Jan 4 15:24:47.208: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-7508 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 15:24:47.208: INFO: >>> kubeConfig: /root/.kube/config I0104 15:24:47.242785 9 log.go:172] (0xc002597e40) (0xc0013c3ae0) Create stream I0104 15:24:47.242909 9 log.go:172] (0xc002597e40) (0xc0013c3ae0) Stream added, broadcasting: 1 I0104 15:24:47.248002 9 log.go:172] (0xc002597e40) Reply frame received for 1 I0104 15:24:47.248090 9 log.go:172] (0xc002597e40) (0xc000f33400) Create stream I0104 15:24:47.248101 9 log.go:172] (0xc002597e40) (0xc000f33400) Stream added, broadcasting: 3 I0104 15:24:47.249361 9 log.go:172] (0xc002597e40) Reply frame received for 3 I0104 15:24:47.249383 9 log.go:172] (0xc002597e40) (0xc000f33540) Create stream I0104 15:24:47.249392 9 log.go:172] (0xc002597e40) (0xc000f33540) Stream added, broadcasting: 5 I0104 15:24:47.250702 9 log.go:172] (0xc002597e40) Reply frame received for 5 I0104 15:24:47.327854 9 log.go:172] (0xc002597e40) Data frame received for 3 I0104 15:24:47.327890 9 log.go:172] (0xc000f33400) (3) Data frame handling I0104 15:24:47.327912 9 log.go:172] (0xc000f33400) (3) Data frame sent I0104 15:24:47.415646 9 log.go:172] (0xc002597e40) Data frame received for 1 I0104 15:24:47.415741 9 log.go:172] (0xc002597e40) (0xc000f33400) Stream removed, broadcasting: 3 I0104 15:24:47.415788 9 log.go:172] (0xc0013c3ae0) (1) Data frame handling I0104 15:24:47.415808 9 log.go:172] (0xc0013c3ae0) (1) Data frame sent I0104 15:24:47.415822 9 log.go:172] (0xc002597e40) (0xc000f33540) Stream removed, broadcasting: 5 I0104 15:24:47.415843 9 log.go:172] (0xc002597e40) (0xc0013c3ae0) Stream removed, broadcasting: 1 I0104 15:24:47.415866 9 log.go:172] (0xc002597e40) Go away received I0104 15:24:47.415955 9 log.go:172] (0xc002597e40) (0xc0013c3ae0) Stream removed, broadcasting: 1 I0104 15:24:47.416097 9 log.go:172] (0xc002597e40) (0xc000f33400) Stream removed, broadcasting: 3 I0104 15:24:47.416115 9 log.go:172] (0xc002597e40) (0xc000f33540) Stream removed, broadcasting: 5 Jan 4 15:24:47.416: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:24:47.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7508" for this suite. 
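Both probes above hit the agnhost webserver on the host-test pod at :8080/dial, asking it to relay a hostname request to each target pod and report the answers. A small sketch that rebuilds the probe URL from its parts; note url.Values sorts the query keys on Encode, so the ordering differs from the literal log line, but the query is equivalent.

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the query issued by the test's host container: the
// agnhost "dial" handler on proxyIP asks targetIP:port for its hostname
// over the given protocol, up to `tries` times.
func dialURL(proxyIP, targetIP string, port, tries int, protocol string) string {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", protocol)
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", fmt.Sprint(tries))
	return fmt.Sprintf("http://%s:8080/dial?%s", proxyIP, q.Encode())
}

func main() {
	// Matches the first probe in the log: 10.44.0.2 dials 10.44.0.1:8080.
	fmt.Println(dialURL("10.44.0.2", "10.44.0.1", 8080, 1, "http"))
}
```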
• [SLOW TEST:39.356 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3654,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:24:48.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 4 15:25:03.590: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:25:03.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-972" for this suite. 
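This pair of Container Runtime tests (the "DONE" case earlier and the "OK" case here) exercises TerminationMessagePolicy FallbackToLogsOnError: the kubelet reports the contents of the file at terminationMessagePath when one exists, and falls back to the tail of the container log only when the file is empty and the container exited with an error. A sketch of the container shape for the file-based success case; the image and command are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		// Default path the kubelet reads the termination message from.
		TerminationMessagePath: "/dev/termination-log",
		// Use the log tail only if the file is empty AND the container
		// exited with an error; here the pod succeeds, so "OK" is read
		// from the file.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}
```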
• [SLOW TEST:15.681 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:25:03.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-051e201a-354c-4a13-9d67-4f70d65c5371 STEP: Creating a pod to test consume configMaps Jan 4 15:25:03.907: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957" in namespace "configmap-2275" to be "success or failure" Jan 4 15:25:03.972: INFO: Pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957": Phase="Pending", Reason="", readiness=false. Elapsed: 64.859402ms Jan 4 15:25:05.977: INFO: Pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069687644s Jan 4 15:25:07.982: INFO: Pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074582259s Jan 4 15:25:09.987: INFO: Pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079795369s Jan 4 15:25:11.993: INFO: Pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.085587685s STEP: Saw pod success Jan 4 15:25:11.993: INFO: Pod "pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957" satisfied condition "success or failure" Jan 4 15:25:11.997: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957 container configmap-volume-test: STEP: delete the pod Jan 4 15:25:12.164: INFO: Waiting for pod pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957 to disappear Jan 4 15:25:12.225: INFO: Pod pod-configmaps-9b6c375f-1142-4c53-b641-f1ad25d14957 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:25:12.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2275" for this suite. • [SLOW TEST:8.497 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3685,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:25:12.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:25:12.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-71' Jan 4 15:25:15.341: INFO: stderr: "" Jan 4 15:25:15.342: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jan 4 15:25:15.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-71' Jan 4 15:25:15.806: INFO: stderr: "" Jan 4 15:25:15.806: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jan 4 15:25:16.823: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:16.823: INFO: Found 0 / 1 Jan 4 15:25:17.813: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:17.813: INFO: Found 0 / 1 Jan 4 15:25:18.812: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:18.812: INFO: Found 0 / 1 Jan 4 15:25:19.810: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:19.810: INFO: Found 0 / 1 Jan 4 15:25:20.813: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:20.813: INFO: Found 0 / 1 Jan 4 15:25:21.813: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:21.813: INFO: Found 1 / 1 Jan 4 15:25:21.813: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 4 15:25:21.819: INFO: Selector matched 1 pods for map[app:agnhost] Jan 4 15:25:21.819: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 4 15:25:21.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-kx9tv --namespace=kubectl-71' Jan 4 15:25:21.983: INFO: stderr: "" Jan 4 15:25:21.983: INFO: stdout: "Name: agnhost-master-kx9tv\nNamespace: kubectl-71\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Sat, 04 Jan 2020 15:25:15 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://52fc1c656f19f35d39ef626fcb8de87acb4f3f75ac1ed7fab3596f33179470cb\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 04 Jan 2020 15:25:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bl5vv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bl5vv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bl5vv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-71/agnhost-master-kx9tv to jerma-node\n Normal Pulled 3s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-node Created container agnhost-master\n Normal Started 1s kubelet, jerma-node Started container agnhost-master\n" Jan 4 15:25:21.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-71' Jan 4 15:25:22.093: INFO: stderr: "" Jan 4 15:25:22.093: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-71\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal 
SuccessfulCreate 7s replication-controller Created pod: agnhost-master-kx9tv\n" Jan 4 15:25:22.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-71' Jan 4 15:25:22.216: INFO: stderr: "" Jan 4 15:25:22.216: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-71\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.118.180\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 4 15:25:22.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Jan 4 15:25:22.360: INFO: stderr: "" Jan 4 15:25:22.360: INFO: stdout: "Name: jerma-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: \n RenewTime: Sat, 04 Jan 2020 15:25:13 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Sat, 04 Jan 2020 15:22:08 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 04 Jan 2020 15:22:08 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 04 Jan 2020 15:22:08 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 04 Jan 2020 15:22:08 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h25m\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 3h25m\n kubectl-71 agnhost-master-kx9tv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 4 15:25:22.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-71' Jan 4 15:25:22.448: INFO: stderr: "" Jan 4 15:25:22.448: INFO: stdout: "Name: kubectl-71\nLabels: e2e-framework=kubectl\n e2e-run=b8c344be-d34b-4d1b-befa-001e925c83f6\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:25:22.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-71" for this suite. 
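The five describe assertions above reduce to one kubectl invocation per object kind; a minimal sketch for replaying them by hand, using the names this particular run created (substitute your own pod, rc, service, node, and namespace):

  kubectl describe pod agnhost-master-kx9tv --namespace=kubectl-71
  kubectl describe rc agnhost-master --namespace=kubectl-71
  kubectl describe service agnhost-master --namespace=kubectl-71
  kubectl describe node jerma-node
  kubectl describe namespace kubectl-71

The spec only asserts that each stdout carries the expected fields (Name, Namespace, Labels, Events, and so on), which is exactly the output captured above.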
• [SLOW TEST:10.209 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":237,"skipped":3687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:25:22.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3235 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3235 STEP: Deleting pre-stop pod Jan 4 15:25:47.861: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:25:47.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3235" for this suite. 
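The "Received": { "prestop": 1 } above is the server pod counting a single callback fired by the tester pod's preStop hook while that pod was being deleted. A minimal sketch of a pod carrying such a hook; the pod name, image, and target URL here are hypothetical stand-ins, not the test's own manifests:

  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo                  # hypothetical name
  spec:
    terminationGracePeriodSeconds: 30   # bounds how long the hook may run
    containers:
    - name: app
      image: nginx:1.14-alpine
      lifecycle:
        preStop:
          exec:
            # hypothetical endpoint; the e2e tester instead calls back to its server pod
            command: ["/bin/sh", "-c", "wget -qO- http://server.example/prestop || true"]

The kubelet runs the hook before it sends the container its termination signal, which is why the server records the prestop hit while "StillContactingPeers" is still true.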
• [SLOW TEST:25.452 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":238,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:25:47.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0104 15:26:29.896857 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 15:26:29.896: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:26:29.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6228" for this suite. 
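The 30-second wait above is the heart of this spec: the rc is deleted with deleteOptions.propagationPolicy=Orphan, and the garbage collector must leave the pods alone. With the kubectl shipped in this release the rough equivalent is --cascade=false (newer kubectl spells the same thing --cascade=orphan); the rc name and label here are hypothetical:

  kubectl delete rc my-rc --cascade=false   # orphan the dependents instead of cascading
  kubectl get pods -l app=my-app            # the pods survive, with ownerReferences cleared

The same propagation policy drives the spec a little further down that deletes a Deployment and expects its ReplicaSet to be orphaned.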
• [SLOW TEST:42.004 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":239,"skipped":3760,"failed":0} SSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:26:29.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9605 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9605 to expose endpoints map[] Jan 4 15:26:30.116: INFO: Get endpoints failed (4.636113ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 4 15:26:31.123: INFO: successfully validated that service multi-endpoint-test in namespace services-9605 exposes endpoints map[] (1.011178809s elapsed) STEP: Creating pod pod1 in namespace services-9605 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9605 to expose endpoints map[pod1:[100]] Jan 4 15:26:35.336: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.20041692s elapsed, will retry) Jan 4 15:26:44.648: INFO: successfully validated that service multi-endpoint-test in namespace services-9605 exposes endpoints map[pod1:[100]] (13.512335706s elapsed) STEP: Creating pod pod2 in namespace services-9605 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9605 to expose endpoints map[pod1:[100] pod2:[101]] Jan 4 15:26:53.129: INFO: Unexpected endpoints: found map[378de519-1bdc-4dbf-83b2-244339d450f2:[100]], expected map[pod1:[100] pod2:[101]] (8.275098972s elapsed, will retry) Jan 4 15:26:57.176: INFO: successfully validated that service multi-endpoint-test in namespace services-9605 exposes endpoints map[pod1:[100] pod2:[101]] (12.321681761s elapsed) STEP: Deleting pod pod1 in namespace services-9605 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9605 to expose endpoints map[pod2:[101]] Jan 4 15:26:58.409: INFO: successfully validated that service multi-endpoint-test in namespace services-9605 exposes endpoints map[pod2:[101]] (1.222477043s elapsed) STEP: Deleting pod pod2 in namespace services-9605 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9605 to expose endpoints map[] Jan 4 15:26:59.587: INFO: successfully validated that service multi-endpoint-test in namespace services-9605 exposes endpoints map[] (1.173029376s elapsed) [AfterEach] [sig-network] 
Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:26:59.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9605" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:30.208 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":240,"skipped":3766,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:27:00.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0104 15:27:32.200081 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 15:27:32.200: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:27:32.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5600" for this suite. 
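One spec back, between the two garbage-collector runs, the Services test drove multi-endpoint-test until it exposed endpoints map[pod1:[100] pod2:[101]]. A minimal sketch of a two-port Service of that shape; the selector and port names are hypothetical, while the target ports 100 and 101 match the endpoints validated in the log:

  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multiport-demo      # hypothetical label
    ports:
    - name: portname1          # hypothetical port names
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101

Endpoints for a port only materialize once a matching pod is running with that containerPort, which is why the log shows "Get endpoints failed ... ignoring for 5s" and "Unexpected endpoints ... will retry" until each pod comes up.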
• [SLOW TEST:32.094 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":241,"skipped":3780,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:27:32.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 4 15:27:32.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8952' Jan 4 15:27:32.450: INFO: stderr: "" Jan 4 15:27:32.450: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 4 15:27:47.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8952 -o json' Jan 4 15:27:47.745: INFO: stderr: "" Jan 4 15:27:47.745: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-04T15:27:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8952\",\n \"resourceVersion\": \"46059\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8952/pods/e2e-test-httpd-pod\",\n \"uid\": \"a6533320-3271-40eb-92fe-fc23a21a46de\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dttwh\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dttwh\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dttwh\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-04T15:27:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-04T15:27:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-04T15:27:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-04T15:27:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://1ee27071d94c11e41ce33ac20cda30b539b34670b58066bec3d1a8cbf65806eb\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-04T15:27:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.2\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-04T15:27:32Z\"\n }\n}\n" STEP: replace the image in the pod Jan 4 15:27:47.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8952' Jan 4 15:27:48.163: INFO: stderr: "" Jan 4 15:27:48.163: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Jan 4 15:27:48.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8952' Jan 4 15:27:59.012: INFO: stderr: "" Jan 4 15:27:59.012: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:27:59.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8952" for this suite. 
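The replace step above takes the pod's own JSON (dumped in full a few lines up), swaps the image, and feeds the result back through kubectl replace -f -. A rough by-hand equivalent; the sed substitution is a stand-in for whatever JSON rewriting the test framework performs:

  kubectl get pod e2e-test-httpd-pod --namespace=kubectl-8952 -o json \
    | sed "s|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|" \
    | kubectl replace --namespace=kubectl-8952 -f -

This works because spec.containers[*].image is one of the few pod spec fields that may be mutated on a live pod; a replace that touched most other spec fields would be rejected by the apiserver.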
• [SLOW TEST:26.812 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":242,"skipped":3783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:27:59.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:28:05.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4194" for this suite. STEP: Destroying namespace "nsdeletetest-5942" for this suite. Jan 4 15:28:05.328: INFO: Namespace nsdeletetest-5942 was already deleted STEP: Destroying namespace "nsdeletetest-0" for this suite. 
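The steps above can be reproduced with nothing but kubectl; the namespace and service names here are hypothetical stand-ins for the test's generated ones:

  kubectl create namespace nsdeletetest-demo
  kubectl create service clusterip test-svc --tcp=80:80 --namespace=nsdeletetest-demo
  kubectl delete namespace nsdeletetest-demo
  # once deletion finishes and the namespace is recreated, no Service remains:
  kubectl get services --namespace=nsdeletetest-demo

Namespace deletion is asynchronous, which is why the spec explicitly waits for the namespace to be removed before recreating it and verifying that no Service survived.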
• [SLOW TEST:6.302 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":243,"skipped":3837,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:28:05.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 4 15:28:05.439: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:28:19.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8286" for this suite.
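"Mark a version not served" means flipping served: false on one entry of the CRD's versions list; the apiserver then drops that version's definitions from the published OpenAPI document (visible via kubectl get --raw /openapi/v2) while leaving the other version untouched. The relevant fragment of a multi-version CRD, with group, names, and schemas omitted:

  spec:
    versions:
    - name: v1
      served: true      # stays in /openapi/v2
      storage: true
    - name: v2
      served: false     # the step under test: v2's definition disappears
      storage: false

Exactly one version must carry storage: true, so only the served flag is toggled here.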
• [SLOW TEST:13.698 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":244,"skipped":3838,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:28:19.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:28:23.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8931" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":245,"skipped":3844,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:28:23.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 4 15:28:23.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5962' Jan 4 15:28:24.084: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 4 15:28:24.084: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jan 4 15:28:24.148: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-fwl65] Jan 4 15:28:24.148: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-fwl65" in namespace "kubectl-5962" to be "running and ready" Jan 4 15:28:24.154: INFO: Pod "e2e-test-httpd-rc-fwl65": Phase="Pending", Reason="", readiness=false. Elapsed: 5.942177ms Jan 4 15:28:26.161: INFO: Pod "e2e-test-httpd-rc-fwl65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012840321s Jan 4 15:28:28.168: INFO: Pod "e2e-test-httpd-rc-fwl65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02014273s Jan 4 15:28:30.175: INFO: Pod "e2e-test-httpd-rc-fwl65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027387068s Jan 4 15:28:32.183: INFO: Pod "e2e-test-httpd-rc-fwl65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035172463s Jan 4 15:28:34.188: INFO: Pod "e2e-test-httpd-rc-fwl65": Phase="Running", Reason="", readiness=true. Elapsed: 10.040485442s Jan 4 15:28:34.188: INFO: Pod "e2e-test-httpd-rc-fwl65" satisfied condition "running and ready" Jan 4 15:28:34.188: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-fwl65] Jan 4 15:28:34.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5962' Jan 4 15:28:34.379: INFO: stderr: "" Jan 4 15:28:34.379: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Sat Jan 04 15:28:32.140584 2020] [mpm_event:notice] [pid 1:tid 140634143005544] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Jan 04 15:28:32.140657 2020] [core:notice] [pid 1:tid 140634143005544] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 4 15:28:34.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5962' Jan 4 15:28:34.579: INFO: stderr: "" Jan 4 15:28:34.579: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 4 15:28:34.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5962" for this suite. 
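The DEPRECATED warning captured above is kubectl itself objecting to the run/v1 generator; the spec exercises it anyway because this conformance suite pins v1.17 behavior. Side by side, with the image from this run:

  # what the spec runs (creates a ReplicationController; deprecated):
  kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
  # the replacements the warning points to:
  kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --generator=run-pod/v1
  kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine

Note also the rc/ prefix in the logs call above: kubectl logs rc/e2e-test-httpd-rc resolves the controller to one of its pods and streams that pod's logs.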
• [SLOW TEST:10.793 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":246,"skipped":3845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 4 15:28:34.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jan 4 15:28:34.642: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jan 4 15:28:34.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2961' Jan 4 15:28:35.145: INFO: stderr: "" Jan 4 15:28:35.145: INFO: stdout: "service/agnhost-slave created\n" Jan 4 15:28:35.145: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jan 4 15:28:35.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2961' Jan 4 15:28:35.613: INFO: stderr: "" Jan 4 15:28:35.613: INFO: stdout: "service/agnhost-master created\n" Jan 4 15:28:35.614: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 4 15:28:35.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2961' Jan 4 15:28:36.016: INFO: stderr: "" Jan 4 15:28:36.016: INFO: stdout: "service/frontend created\n" Jan 4 15:28:36.016: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 4 15:28:36.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2961' Jan 4 15:28:36.609: INFO: stderr: "" Jan 4 15:28:36.609: INFO: stdout: "deployment.apps/frontend created\n" Jan 4 15:28:36.610: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 4 15:28:36.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2961' Jan 4 15:28:37.154: INFO: stderr: "" Jan 4 15:28:37.154: INFO: stdout: "deployment.apps/agnhost-master created\n" Jan 4 15:28:37.154: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 4 15:28:37.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2961' Jan 4 15:28:39.065: INFO: stderr: "" Jan 4 15:28:39.065: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jan 4 15:28:39.065: INFO: Waiting for all frontend pods to be Running. Jan 4 15:29:04.116: INFO: Waiting for frontend to serve content. Jan 4 15:29:04.154: INFO: Trying to add a new entry to the guestbook. Jan 4 15:29:04.168: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:09.183: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:14.202: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:19.221: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:24.242: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:29.260: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:34.278: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:39.294: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:44.303: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:49.318: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:54.337: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:29:59.356: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:04.373: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:09.391: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:14.407: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:19.437: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:24.460: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:29.661: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:34.681: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:39.698: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:44.721: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:49.745: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:54.763: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:30:59.893: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:04.914: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:09.935: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:14.956: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:19.969: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:24.984: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:30.004: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:35.017: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:40.027: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:45.047: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:50.110: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:31:55.120: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:32:00.137: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Jan 4 15:32:05.137: FAIL: Cannot add new entry in 180 seconds. Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc004210dc0, 0xc002abce70, 0xc) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551 k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002b36300) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a k8s.io/kubernetes/test/e2e.TestE2E(0xc002b36300) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b testing.tRunner(0xc002b36300, 0x4c30de8) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 STEP: using delete to clean up resources Jan 4 15:32:05.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2961' Jan 4 15:32:05.358: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 15:32:05.358: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jan 4 15:32:05.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2961' Jan 4 15:32:05.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 15:32:05.658: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 4 15:32:05.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2961' Jan 4 15:32:05.811: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jan 4 15:32:05.811: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 4 15:32:05.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2961' Jan 4 15:32:05.926: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 15:32:05.926: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 4 15:32:05.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2961' Jan 4 15:32:06.041: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 15:32:06.042: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 4 15:32:06.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2961' Jan 4 15:32:06.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 4 15:32:06.162: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "kubectl-2961". STEP: Found 33 events. 
Jan 4 15:32:06.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-bpd8g: {default-scheduler } Scheduled: Successfully assigned kubectl-2961/agnhost-master-74c46fb7d4-bpd8g to jerma-node Jan 4 15:32:06.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-bk9d7: {default-scheduler } Scheduled: Successfully assigned kubectl-2961/agnhost-slave-774cfc759f-bk9d7 to jerma-node Jan 4 15:32:06.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-dfk7f: {default-scheduler } Scheduled: Successfully assigned kubectl-2961/agnhost-slave-774cfc759f-dfk7f to jerma-server-mvvl6gufaqub Jan 4 15:32:06.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-28tjg: {default-scheduler } Scheduled: Successfully assigned kubectl-2961/frontend-6c5f89d5d4-28tjg to jerma-node Jan 4 15:32:06.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-522gn: {default-scheduler } Scheduled: Successfully assigned kubectl-2961/frontend-6c5f89d5d4-522gn to jerma-node Jan 4 15:32:06.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-hnhgs: {default-scheduler } Scheduled: Successfully assigned kubectl-2961/frontend-6c5f89d5d4-hnhgs to jerma-server-mvvl6gufaqub Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:36 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3 Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:36 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-hnhgs Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:36 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-522gn Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:36 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-28tjg Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:37 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1 Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:37 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-bpd8g Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:39 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2 Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:39 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-dfk7f Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:39 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-bk9d7 Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:45 +0000 UTC - event for frontend-6c5f89d5d4-hnhgs: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:46 +0000 UTC - event for frontend-6c5f89d5d4-28tjg: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:48 +0000 UTC - event for agnhost-slave-774cfc759f-dfk7f: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image 
"gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:50 +0000 UTC - event for frontend-6c5f89d5d4-hnhgs: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:51 +0000 UTC - event for agnhost-slave-774cfc759f-dfk7f: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:51 +0000 UTC - event for frontend-6c5f89d5d4-522gn: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:51 +0000 UTC - event for frontend-6c5f89d5d4-hnhgs: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:52 +0000 UTC - event for agnhost-master-74c46fb7d4-bpd8g: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:52 +0000 UTC - event for agnhost-slave-774cfc759f-dfk7f: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:52 +0000 UTC - event for frontend-6c5f89d5d4-28tjg: {kubelet jerma-node} Created: Created container guestbook-frontend Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:55 +0000 UTC - event for agnhost-slave-774cfc759f-bk9d7: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:58 +0000 UTC - event for agnhost-master-74c46fb7d4-bpd8g: {kubelet jerma-node} Created: Created container master Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:58 +0000 UTC - event for agnhost-slave-774cfc759f-bk9d7: {kubelet jerma-node} Created: Created container slave Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:58 +0000 UTC - event for frontend-6c5f89d5d4-28tjg: {kubelet jerma-node} Started: Started container guestbook-frontend Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:58 +0000 UTC - event for frontend-6c5f89d5d4-522gn: {kubelet jerma-node} Created: Created container guestbook-frontend Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:59 +0000 UTC - event for agnhost-master-74c46fb7d4-bpd8g: {kubelet jerma-node} Started: Started container master Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:59 +0000 UTC - event for agnhost-slave-774cfc759f-bk9d7: {kubelet jerma-node} Started: Started container slave Jan 4 15:32:06.175: INFO: At 2020-01-04 15:28:59 +0000 UTC - event for frontend-6c5f89d5d4-522gn: {kubelet jerma-node} Started: Started container guestbook-frontend Jan 4 15:32:06.180: INFO: POD NODE PHASE GRACE CONDITIONS Jan 4 15:32:06.180: INFO: agnhost-master-74c46fb7d4-bpd8g jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:37 +0000 UTC }] Jan 4 15:32:06.180: INFO: agnhost-slave-774cfc759f-bk9d7 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:29:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:29:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-01-04 15:28:39 +0000 UTC }] Jan 4 15:32:06.180: INFO: agnhost-slave-774cfc759f-dfk7f jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:39 +0000 UTC }] Jan 4 15:32:06.180: INFO: frontend-6c5f89d5d4-28tjg jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:36 +0000 UTC }] Jan 4 15:32:06.180: INFO: frontend-6c5f89d5d4-522gn jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:36 +0000 UTC }] Jan 4 15:32:06.180: INFO: frontend-6c5f89d5d4-hnhgs jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:28:36 +0000 UTC }] Jan 4 15:32:06.180: INFO: Jan 4 15:32:06.215: INFO: Logging node info for node jerma-node Jan 4 15:32:06.222: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 45939 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-04 15:27:09 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-04 15:27:09 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-04 15:27:09 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-04 15:27:09 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 
appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 4 15:32:06.223: INFO: Logging kubelet events for node jerma-node Jan 4 15:32:06.242: INFO: Logging pods the kubelet thinks is on node jerma-node Jan 4 15:32:06.440: INFO: agnhost-slave-774cfc759f-bk9d7 started at 2020-01-04 15:28:41 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.440: INFO: Container slave ready: true, restart count 0 Jan 4 15:32:06.440: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.440: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 15:32:06.440: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Jan 4 15:32:06.440: INFO: Container weave ready: true, restart count 1 Jan 4 15:32:06.440: INFO: Container weave-npc ready: true, restart count 0 Jan 4 15:32:06.440: INFO: frontend-6c5f89d5d4-522gn started at 2020-01-04 15:28:38 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.440: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 4 15:32:06.440: INFO: frontend-6c5f89d5d4-28tjg started at 2020-01-04 15:28:38 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.440: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 4 15:32:06.440: INFO: agnhost-master-74c46fb7d4-bpd8g started at 2020-01-04 15:28:40 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.440: INFO: Container master ready: true, restart count 0 W0104 15:32:06.453278 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 4 15:32:06.643: INFO: Latency metrics for node jerma-node Jan 4 15:32:06.644: INFO: Logging node info for node jerma-server-mvvl6gufaqub Jan 4 15:32:06.705: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 46937 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-04 15:31:41 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-04 15:31:41 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-04 15:31:41 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-04 15:31:41 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 
kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 4 15:32:06.706: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Jan 4 15:32:06.743: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Jan 4 15:32:06.809: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.809: INFO: Container coredns ready: true, restart count 0 Jan 4 15:32:06.809: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.809: INFO: Container coredns ready: true, restart count 0 Jan 4 15:32:06.809: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.809: INFO: Container kube-controller-manager ready: true, restart count 1 Jan 4 15:32:06.809: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.809: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 15:32:06.809: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Jan 4 15:32:06.809: INFO: Container weave ready: true, restart count 0 Jan 4 15:32:06.809: INFO: Container weave-npc ready: true, restart count 0 Jan 4 15:32:06.809: INFO: agnhost-slave-774cfc759f-dfk7f started at 2020-01-04 15:28:40 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.809: INFO: Container slave ready: true, restart count 0 Jan 4 15:32:06.809: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.809: INFO: Container kube-scheduler ready: true, restart count 2 Jan 4 15:32:06.809: INFO: frontend-6c5f89d5d4-hnhgs started at 2020-01-04 15:28:36 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.810: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 4 15:32:06.810: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.810: INFO: Container kube-apiserver ready: true, restart count 1 Jan 4 15:32:06.810: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Jan 4 15:32:06.810: INFO: Container etcd ready: true, restart count 1 W0104 15:32:08.106411 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 15:32:08.137: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Jan 4 15:32:08.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2961" for this suite. • Failure [213.751 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 4 15:32:05.137: Cannot added new entry in 180 seconds. 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":246,"skipped":3868,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:32:08.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  4 15:32:08.874: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/:
apt/
auth.log
btmp
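For reference, the request the test issues above, /api/v1/nodes/<node>:<kubelet-port>/proxy/logs/, can be reproduced outside the suite with client-go. A minimal sketch, assuming a recent client-go, the kubeconfig path this run uses, and the node name from the log; error handling is reduced to panics:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/
        // The node name carries an explicit kubelet port, as in the test title.
        raw, err := cs.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name("jerma-server-mvvl6gufaqub:10250").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        // Prints the kubelet's log directory listing: apt/, auth.log, btmp, ...
        fmt.Printf("%s", raw)
    }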
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan  4 15:32:09.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 15:32:09.653: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 15:32:09.655: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan  4 15:32:09.662: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan  4 15:32:09.662: INFO: 	Container weave ready: true, restart count 1
Jan  4 15:32:09.662: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 15:32:09.662: INFO: frontend-6c5f89d5d4-522gn from kubectl-2961 started at 2020-01-04 15:28:38 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.662: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan  4 15:32:09.662: INFO: frontend-6c5f89d5d4-28tjg from kubectl-2961 started at 2020-01-04 15:28:38 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.662: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan  4 15:32:09.662: INFO: agnhost-master-74c46fb7d4-bpd8g from kubectl-2961 started at 2020-01-04 15:28:40 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.662: INFO: 	Container master ready: true, restart count 0
Jan  4 15:32:09.662: INFO: agnhost-slave-774cfc759f-bk9d7 from kubectl-2961 started at 2020-01-04 15:28:41 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.662: INFO: 	Container slave ready: true, restart count 0
Jan  4 15:32:09.662: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.662: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 15:32:09.662: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan  4 15:32:09.668: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container kube-scheduler ready: true, restart count 2
Jan  4 15:32:09.668: INFO: frontend-6c5f89d5d4-hnhgs from kubectl-2961 started at 2020-01-04 15:28:36 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan  4 15:32:09.668: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan  4 15:32:09.668: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container etcd ready: true, restart count 1
Jan  4 15:32:09.668: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container coredns ready: true, restart count 0
Jan  4 15:32:09.668: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container coredns ready: true, restart count 0
Jan  4 15:32:09.668: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jan  4 15:32:09.668: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 15:32:09.668: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container weave ready: true, restart count 0
Jan  4 15:32:09.668: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 15:32:09.668: INFO: agnhost-slave-774cfc759f-dfk7f from kubectl-2961 started at 2020-01-04 15:28:40 +0000 UTC (1 container statuses recorded)
Jan  4 15:32:09.668: INFO: 	Container slave ready: true, restart count 0
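The per-node pod listings above can be reproduced with a server-side field selector. A minimal client-go sketch, under the same kubeconfig assumption as the earlier sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // All pods bound to jerma-node, across all namespaces,
        // filtered by the API server rather than client-side.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "spec.nodeName=jerma-node"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }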
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-af14c8d7-46cf-4fb7-96c1-cc5c24e6efb5 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-af14c8d7-46cf-4fb7-96c1-cc5c24e6efb5 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-af14c8d7-46cf-4fb7-96c1-cc5c24e6efb5
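The conflict asserted above comes down to two pod specs that differ only in hostIP. A hedged sketch with client-go types; the helper name is made up, the image and container port are illustrative, and the node label and namespace are taken from this run:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // hostPortPod is a hypothetical helper: same hostPort and protocol, different hostIP.
    func hostPortPod(name, hostIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                // Pin both pods to the labeled node so only the hostPort decides.
                NodeSelector: map[string]string{"kubernetes.io/e2e-af14c8d7-46cf-4fb7-96c1-cc5c24e6efb5": "95"},
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 8080,
                        HostPort:      54322,
                        HostIP:        hostIP, // "" (meaning 0.0.0.0) vs "127.0.0.1"
                        Protocol:      corev1.ProtocolTCP,
                    }},
                }},
            },
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("sched-pred-5436")
        for _, p := range []*corev1.Pod{hostPortPod("pod4", ""), hostPortPod("pod5", "127.0.0.1")} {
            if _, err := pods.Create(context.TODO(), p, metav1.CreateOptions{}); err != nil {
                panic(err)
            }
        }
    }

Because an empty hostIP is interpreted as 0.0.0.0, the scheduler sees pod5's 127.0.0.1:54322 as overlapping pod4's binding and leaves pod5 Pending, which matches the roughly five-minute wait between 15:32 and 15:37 in the log above.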
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:37:36.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5436" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:327.684 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":248,"skipped":3887,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:37:36.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4100
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4100
I0104 15:37:37.281897       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4100, replica count: 2
I0104 15:37:40.332404       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:37:43.332684       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:37:46.332998       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:37:49.333235       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:37:52.333528       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  4 15:37:52.333: INFO: Creating new exec pod
Jan  4 15:38:03.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodlkmqw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan  4 15:38:06.134: INFO: stderr: "I0104 15:38:05.974049    4282 log.go:172] (0xc0000f4fd0) (0xc0006b7d60) Create stream\nI0104 15:38:05.974141    4282 log.go:172] (0xc0000f4fd0) (0xc0006b7d60) Stream added, broadcasting: 1\nI0104 15:38:05.977024    4282 log.go:172] (0xc0000f4fd0) Reply frame received for 1\nI0104 15:38:05.977044    4282 log.go:172] (0xc0000f4fd0) (0xc000664640) Create stream\nI0104 15:38:05.977050    4282 log.go:172] (0xc0000f4fd0) (0xc000664640) Stream added, broadcasting: 3\nI0104 15:38:05.977754    4282 log.go:172] (0xc0000f4fd0) Reply frame received for 3\nI0104 15:38:05.977779    4282 log.go:172] (0xc0000f4fd0) (0xc000313400) Create stream\nI0104 15:38:05.977788    4282 log.go:172] (0xc0000f4fd0) (0xc000313400) Stream added, broadcasting: 5\nI0104 15:38:05.978909    4282 log.go:172] (0xc0000f4fd0) Reply frame received for 5\nI0104 15:38:06.061437    4282 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0104 15:38:06.061482    4282 log.go:172] (0xc000313400) (5) Data frame handling\nI0104 15:38:06.061498    4282 log.go:172] (0xc000313400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0104 15:38:06.063564    4282 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0104 15:38:06.063575    4282 log.go:172] (0xc000313400) (5) Data frame handling\nI0104 15:38:06.063581    4282 log.go:172] (0xc000313400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0104 15:38:06.125848    4282 log.go:172] (0xc0000f4fd0) (0xc000313400) Stream removed, broadcasting: 5\nI0104 15:38:06.125986    4282 log.go:172] (0xc0000f4fd0) Data frame received for 1\nI0104 15:38:06.126042    4282 log.go:172] (0xc0000f4fd0) (0xc000664640) Stream removed, broadcasting: 3\nI0104 15:38:06.126082    4282 log.go:172] (0xc0006b7d60) (1) Data frame handling\nI0104 15:38:06.126095    4282 log.go:172] (0xc0006b7d60) (1) Data frame sent\nI0104 15:38:06.126104    4282 log.go:172] (0xc0000f4fd0) (0xc0006b7d60) Stream removed, broadcasting: 1\nI0104 15:38:06.126476    4282 log.go:172] (0xc0000f4fd0) (0xc0006b7d60) Stream removed, broadcasting: 1\nI0104 15:38:06.126489    4282 log.go:172] (0xc0000f4fd0) (0xc000664640) Stream removed, broadcasting: 3\nI0104 15:38:06.126495    4282 log.go:172] (0xc0000f4fd0) (0xc000313400) Stream removed, broadcasting: 5\n"
Jan  4 15:38:06.135: INFO: stdout: ""
Jan  4 15:38:06.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodlkmqw -- /bin/sh -x -c nc -zv -t -w 2 10.96.133.200 80'
Jan  4 15:38:06.396: INFO: stderr: "I0104 15:38:06.241303    4304 log.go:172] (0xc000a9edc0) (0xc000bdc280) Create stream\nI0104 15:38:06.241353    4304 log.go:172] (0xc000a9edc0) (0xc000bdc280) Stream added, broadcasting: 1\nI0104 15:38:06.246441    4304 log.go:172] (0xc000a9edc0) Reply frame received for 1\nI0104 15:38:06.246467    4304 log.go:172] (0xc000a9edc0) (0xc0004c88c0) Create stream\nI0104 15:38:06.246474    4304 log.go:172] (0xc000a9edc0) (0xc0004c88c0) Stream added, broadcasting: 3\nI0104 15:38:06.247581    4304 log.go:172] (0xc000a9edc0) Reply frame received for 3\nI0104 15:38:06.247600    4304 log.go:172] (0xc000a9edc0) (0xc000757680) Create stream\nI0104 15:38:06.247610    4304 log.go:172] (0xc000a9edc0) (0xc000757680) Stream added, broadcasting: 5\nI0104 15:38:06.249218    4304 log.go:172] (0xc000a9edc0) Reply frame received for 5\nI0104 15:38:06.325364    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.325449    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.325463    4304 log.go:172] (0xc000757680) (5) Data frame sent\n+ I0104 15:38:06.325587    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.325629    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.325647    4304 log.go:172] (0xc000757680) (5) Data frame sent\nnc -zvI0104 15:38:06.325771    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.325780    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.325789    4304 log.go:172] (0xc000757680) (5) Data frame sent\n -tI0104 15:38:06.325913    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.325936    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.325947    4304 log.go:172] (0xc000757680) (5) Data frame sent\n -w 2I0104 15:38:06.326115    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.326149    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.326185    4304 log.go:172] (0xc000757680) (5) Data frame sent\n 10.96.133.200I0104 15:38:06.326415    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.326439    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.326450    4304 log.go:172] (0xc000757680) (5) Data frame sent\n 80\nI0104 15:38:06.332857    4304 log.go:172] (0xc000a9edc0) Data frame received for 5\nI0104 15:38:06.332956    4304 log.go:172] (0xc000757680) (5) Data frame handling\nI0104 15:38:06.332974    4304 log.go:172] (0xc000757680) (5) Data frame sent\nConnection to 10.96.133.200 80 port [tcp/http] succeeded!\nI0104 15:38:06.391979    4304 log.go:172] (0xc000a9edc0) (0xc0004c88c0) Stream removed, broadcasting: 3\nI0104 15:38:06.392063    4304 log.go:172] (0xc000a9edc0) Data frame received for 1\nI0104 15:38:06.392074    4304 log.go:172] (0xc000bdc280) (1) Data frame handling\nI0104 15:38:06.392091    4304 log.go:172] (0xc000bdc280) (1) Data frame sent\nI0104 15:38:06.392103    4304 log.go:172] (0xc000a9edc0) (0xc000bdc280) Stream removed, broadcasting: 1\nI0104 15:38:06.392399    4304 log.go:172] (0xc000a9edc0) (0xc000757680) Stream removed, broadcasting: 5\nI0104 15:38:06.392426    4304 log.go:172] (0xc000a9edc0) (0xc000bdc280) Stream removed, broadcasting: 1\nI0104 15:38:06.392434    4304 log.go:172] (0xc000a9edc0) (0xc0004c88c0) Stream removed, broadcasting: 3\nI0104 15:38:06.392442    4304 log.go:172] (0xc000a9edc0) (0xc000757680) Stream removed, broadcasting: 5\nI0104 
15:38:06.392626    4304 log.go:172] (0xc000a9edc0) Go away received\n"
Jan  4 15:38:06.396: INFO: stdout: ""
Jan  4 15:38:06.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodlkmqw -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32681'
Jan  4 15:38:06.715: INFO: stderr: "I0104 15:38:06.493134    4324 log.go:172] (0xc0003c0b00) (0xc0005cfa40) Create stream\nI0104 15:38:06.493257    4324 log.go:172] (0xc0003c0b00) (0xc0005cfa40) Stream added, broadcasting: 1\nI0104 15:38:06.496855    4324 log.go:172] (0xc0003c0b00) Reply frame received for 1\nI0104 15:38:06.496894    4324 log.go:172] (0xc0003c0b00) (0xc0005cfc20) Create stream\nI0104 15:38:06.496903    4324 log.go:172] (0xc0003c0b00) (0xc0005cfc20) Stream added, broadcasting: 3\nI0104 15:38:06.498069    4324 log.go:172] (0xc0003c0b00) Reply frame received for 3\nI0104 15:38:06.498107    4324 log.go:172] (0xc0003c0b00) (0xc000976000) Create stream\nI0104 15:38:06.498125    4324 log.go:172] (0xc0003c0b00) (0xc000976000) Stream added, broadcasting: 5\nI0104 15:38:06.499603    4324 log.go:172] (0xc0003c0b00) Reply frame received for 5\nI0104 15:38:06.602366    4324 log.go:172] (0xc0003c0b00) Data frame received for 5\nI0104 15:38:06.602438    4324 log.go:172] (0xc000976000) (5) Data frame handling\nI0104 15:38:06.602452    4324 log.go:172] (0xc000976000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32681\nI0104 15:38:06.602667    4324 log.go:172] (0xc0003c0b00) Data frame received for 5\nI0104 15:38:06.602681    4324 log.go:172] (0xc000976000) (5) Data frame handling\nI0104 15:38:06.602695    4324 log.go:172] (0xc000976000) (5) Data frame sent\nConnection to 10.96.2.250 32681 port [tcp/32681] succeeded!\nI0104 15:38:06.707497    4324 log.go:172] (0xc0003c0b00) (0xc0005cfc20) Stream removed, broadcasting: 3\nI0104 15:38:06.707900    4324 log.go:172] (0xc0003c0b00) Data frame received for 1\nI0104 15:38:06.707957    4324 log.go:172] (0xc0005cfa40) (1) Data frame handling\nI0104 15:38:06.708000    4324 log.go:172] (0xc0005cfa40) (1) Data frame sent\nI0104 15:38:06.708052    4324 log.go:172] (0xc0003c0b00) (0xc0005cfa40) Stream removed, broadcasting: 1\nI0104 15:38:06.708548    4324 log.go:172] (0xc0003c0b00) (0xc000976000) Stream removed, broadcasting: 5\nI0104 15:38:06.708639    4324 log.go:172] (0xc0003c0b00) (0xc0005cfa40) Stream removed, broadcasting: 1\nI0104 15:38:06.708671    4324 log.go:172] (0xc0003c0b00) (0xc0005cfc20) Stream removed, broadcasting: 3\nI0104 15:38:06.708714    4324 log.go:172] (0xc0003c0b00) (0xc000976000) Stream removed, broadcasting: 5\nI0104 15:38:06.708908    4324 log.go:172] (0xc0003c0b00) Go away received\n"
Jan  4 15:38:06.715: INFO: stdout: ""
Jan  4 15:38:06.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodlkmqw -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32681'
Jan  4 15:38:06.976: INFO: stderr: "I0104 15:38:06.832869    4339 log.go:172] (0xc0007060b0) (0xc000598780) Create stream\nI0104 15:38:06.832923    4339 log.go:172] (0xc0007060b0) (0xc000598780) Stream added, broadcasting: 1\nI0104 15:38:06.835040    4339 log.go:172] (0xc0007060b0) Reply frame received for 1\nI0104 15:38:06.835070    4339 log.go:172] (0xc0007060b0) (0xc0006d4000) Create stream\nI0104 15:38:06.835076    4339 log.go:172] (0xc0007060b0) (0xc0006d4000) Stream added, broadcasting: 3\nI0104 15:38:06.836897    4339 log.go:172] (0xc0007060b0) Reply frame received for 3\nI0104 15:38:06.836974    4339 log.go:172] (0xc0007060b0) (0xc00021c000) Create stream\nI0104 15:38:06.836982    4339 log.go:172] (0xc0007060b0) (0xc00021c000) Stream added, broadcasting: 5\nI0104 15:38:06.838121    4339 log.go:172] (0xc0007060b0) Reply frame received for 5\nI0104 15:38:06.911965    4339 log.go:172] (0xc0007060b0) Data frame received for 5\nI0104 15:38:06.912024    4339 log.go:172] (0xc00021c000) (5) Data frame handling\nI0104 15:38:06.912039    4339 log.go:172] (0xc00021c000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32681\nI0104 15:38:06.915857    4339 log.go:172] (0xc0007060b0) Data frame received for 5\nI0104 15:38:06.915920    4339 log.go:172] (0xc00021c000) (5) Data frame handling\nI0104 15:38:06.915934    4339 log.go:172] (0xc00021c000) (5) Data frame sent\nConnection to 10.96.1.234 32681 port [tcp/32681] succeeded!\nI0104 15:38:06.973278    4339 log.go:172] (0xc0007060b0) Data frame received for 1\nI0104 15:38:06.973330    4339 log.go:172] (0xc000598780) (1) Data frame handling\nI0104 15:38:06.973347    4339 log.go:172] (0xc000598780) (1) Data frame sent\nI0104 15:38:06.973526    4339 log.go:172] (0xc0007060b0) (0xc000598780) Stream removed, broadcasting: 1\nI0104 15:38:06.973813    4339 log.go:172] (0xc0007060b0) (0xc0006d4000) Stream removed, broadcasting: 3\nI0104 15:38:06.973838    4339 log.go:172] (0xc0007060b0) (0xc00021c000) Stream removed, broadcasting: 5\nI0104 15:38:06.973855    4339 log.go:172] (0xc0007060b0) (0xc000598780) Stream removed, broadcasting: 1\nI0104 15:38:06.973863    4339 log.go:172] (0xc0007060b0) (0xc0006d4000) Stream removed, broadcasting: 3\nI0104 15:38:06.973870    4339 log.go:172] (0xc0007060b0) (0xc00021c000) Stream removed, broadcasting: 5\n"
Jan  4 15:38:06.976: INFO: stdout: ""
Jan  4 15:38:06.976: INFO: Cleaning up the ExternalName to NodePort test service
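The step at the heart of this test is a single Service update. A minimal sketch under the same assumptions as the earlier sketches; the selector is an assumption, standing in for whatever labels the externalname-service replication controller puts on its pods:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        svcs := cs.CoreV1().Services("services-4100")
        svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Drop the CNAME target and switch the type; the control plane then
        // allocates a NodePort (32681 in the run above) on every node.
        svc.Spec.ExternalName = ""
        svc.Spec.Type = corev1.ServiceTypeNodePort
        svc.Spec.Ports = []corev1.ServicePort{{
            Port:       80,
            TargetPort: intstr.FromInt(80),
            Protocol:   corev1.ProtocolTCP,
        }}
        svc.Spec.Selector = map[string]string{"name": "externalname-service"} // assumed label
        if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }

The nc probes above then confirm reachability on the cluster IP (port 80) and on both node IPs at the allocated NodePort.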
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:38:07.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4100" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:30.374 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":249,"skipped":3890,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:38:07.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  4 15:38:07.281: INFO: Waiting up to 5m0s for pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b" in namespace "emptydir-723" to be "success or failure"
Jan  4 15:38:07.412: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 130.757331ms
Jan  4 15:38:09.487: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206194648s
Jan  4 15:38:11.492: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211019134s
Jan  4 15:38:13.497: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21662828s
Jan  4 15:38:16.044: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762951036s
Jan  4 15:38:18.048: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.767263178s
Jan  4 15:38:20.336: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.054820678s
Jan  4 15:38:22.349: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.068043502s
STEP: Saw pod success
Jan  4 15:38:22.350: INFO: Pod "pod-b6a36f53-5246-4827-b7cc-773684d1aa1b" satisfied condition "success or failure"
Jan  4 15:38:22.354: INFO: Trying to get logs from node jerma-node pod pod-b6a36f53-5246-4827-b7cc-773684d1aa1b container test-container: 
STEP: delete the pod
Jan  4 15:38:22.451: INFO: Waiting for pod pod-b6a36f53-5246-4827-b7cc-773684d1aa1b to disappear
Jan  4 15:38:22.460: INFO: Pod pod-b6a36f53-5246-4827-b7cc-773684d1aa1b no longer exists
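The pod shape behind "emptydir 0666 on node default medium" can be sketched with client-go types. The mounttest flags below are an assumption about how that image is invoked, mirroring the test's intent of creating a file with mode 0666 and printing the observed permissions:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // run once, success or failure
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // No medium set, i.e. the node's default (disk-backed) medium.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                    // Assumed mounttest flags: create /test-volume/test-file with
                    // mode 0666, then report the fs type and file permissions.
                    Args: []string{
                        "--fs_type=/test-volume",
                        "--new_file_0666=/test-volume/test-file",
                        "--file_perm=/test-volume/test-file",
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("emptydir-723").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }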
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:38:22.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-723" for this suite.

• [SLOW TEST:15.418 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3892,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:38:22.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-dh665 in namespace proxy-8381
I0104 15:38:22.678012       9 runners.go:189] Created replication controller with name: proxy-service-dh665, namespace: proxy-8381, replica count: 1
I0104 15:38:23.729473       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:24.729683       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:25.729912       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:26.730148       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:27.730419       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:28.730734       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:29.730978       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:38:30.731269       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:31.731741       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:32.732279       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:33.732787       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:34.733123       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:35.733397       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:36.733670       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:37.733943       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 15:38:38.734165       9 runners.go:189] proxy-service-dh665 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  4 15:38:38.738: INFO: setup took 16.165962327s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
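Each of the 16 cases below is one proxy URL family. A sketch of representative requests with client-go, under the same kubeconfig assumption as above, with the pod name, ports, and port names taken from this run:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One representative of each URL family the cases cover: a plain pod
        // port, scheme-prefixed pod ports, and named service ports.
        targets := []struct{ resource, name string }{
            {"pods", "proxy-service-dh665-rmhmx:1080"},
            {"pods", "http:proxy-service-dh665-rmhmx:160"},
            {"pods", "https:proxy-service-dh665-rmhmx:460"},
            {"services", "proxy-service-dh665:portname1"},
            {"services", "https:proxy-service-dh665:tlsportname1"},
        }
        for _, t := range targets {
            raw, err := cs.CoreV1().RESTClient().Get().
                Namespace("proxy-8381").
                Resource(t.resource).
                Name(t.name).
                SubResource("proxy").
                DoRaw(context.TODO())
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s/%s -> %q\n", t.resource, t.name, raw)
        }
    }

The http: or https: prefix selects how the API server dials the backend, and the service targets address named ports rather than numbers, which is exactly the split visible in the attempt lines that follow.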
Jan  4 15:38:38.758: INFO: (0) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 20.212263ms)
Jan  4 15:38:38.758: INFO: (0) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 20.208324ms)
Jan  4 15:38:38.759: INFO: (0) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 21.01734ms)
Jan  4 15:38:38.759: INFO: (0) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 21.114805ms)
Jan  4 15:38:38.760: INFO: (0) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 21.867395ms)
Jan  4 15:38:38.760: INFO: (0) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 22.084114ms)
Jan  4 15:38:38.760: INFO: (0) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 22.306687ms)
Jan  4 15:38:38.760: INFO: (0) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 22.527037ms)
Jan  4 15:38:38.760: INFO: (0) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 22.723845ms)
Jan  4 15:38:38.761: INFO: (0) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 22.749885ms)
Jan  4 15:38:38.761: INFO: (0) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 23.439004ms)
Jan  4 15:38:38.773: INFO: (0) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 35.078314ms)
Jan  4 15:38:38.773: INFO: (0) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 35.041959ms)
Jan  4 15:38:38.773: INFO: (0) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 35.169204ms)
Jan  4 15:38:38.774: INFO: (0) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 35.824082ms)
Jan  4 15:38:38.774: INFO: (0) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: ... (200; 10.930319ms)
Jan  4 15:38:38.786: INFO: (1) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 11.408684ms)
Jan  4 15:38:38.786: INFO: (1) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 12.140391ms)
Jan  4 15:38:38.786: INFO: (1) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 11.995369ms)
Jan  4 15:38:38.786: INFO: (1) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 11.895186ms)
Jan  4 15:38:38.787: INFO: (1) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 12.17959ms)
Jan  4 15:38:38.787: INFO: (1) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 12.795976ms)
Jan  4 15:38:38.787: INFO: (1) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 14.937888ms)
Jan  4 15:38:38.789: INFO: (1) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 14.857858ms)
Jan  4 15:38:38.789: INFO: (1) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 14.354928ms)
Jan  4 15:38:38.789: INFO: (1) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 15.164341ms)
Jan  4 15:38:38.790: INFO: (1) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 15.621327ms)
Jan  4 15:38:38.798: INFO: (2) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 8.647755ms)
Jan  4 15:38:38.802: INFO: (2) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 12.633024ms)
Jan  4 15:38:38.802: INFO: (2) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 12.660886ms)
Jan  4 15:38:38.802: INFO: (2) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 12.610728ms)
Jan  4 15:38:38.803: INFO: (2) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 12.934089ms)
Jan  4 15:38:38.803: INFO: (2) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 12.974519ms)
Jan  4 15:38:38.803: INFO: (2) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 13.222877ms)
Jan  4 15:38:38.804: INFO: (2) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 14.238598ms)
Jan  4 15:38:38.804: INFO: (2) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 14.418656ms)
Jan  4 15:38:38.804: INFO: (2) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 14.6462ms)
Jan  4 15:38:38.805: INFO: (2) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test (200; 15.019237ms)
Jan  4 15:38:38.805: INFO: (2) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 15.330996ms)
Jan  4 15:38:38.805: INFO: (2) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 15.315181ms)
Jan  4 15:38:38.811: INFO: (3) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 5.839461ms)
Jan  4 15:38:38.811: INFO: (3) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 5.88128ms)
Jan  4 15:38:38.811: INFO: (3) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 5.864245ms)
Jan  4 15:38:38.811: INFO: (3) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 6.025046ms)
Jan  4 15:38:38.811: INFO: (3) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 6.024355ms)
Jan  4 15:38:38.812: INFO: (3) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 6.656522ms)
Jan  4 15:38:38.812: INFO: (3) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 7.124627ms)
Jan  4 15:38:38.813: INFO: (3) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 7.776975ms)
Jan  4 15:38:38.814: INFO: (3) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 8.507894ms)
Jan  4 15:38:38.814: INFO: (3) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 8.620276ms)
Jan  4 15:38:38.814: INFO: (3) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test (200; 9.005238ms)
Jan  4 15:38:38.829: INFO: (4) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 11.409863ms)
Jan  4 15:38:38.829: INFO: (4) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 11.751865ms)
Jan  4 15:38:38.829: INFO: (4) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 11.918799ms)
Jan  4 15:38:38.829: INFO: (4) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 12.16091ms)
Jan  4 15:38:38.829: INFO: (4) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 12.063319ms)
Jan  4 15:38:38.830: INFO: (4) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 12.900018ms)
Jan  4 15:38:38.831: INFO: (4) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 13.104115ms)
Jan  4 15:38:38.831: INFO: (4) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 13.35206ms)
Jan  4 15:38:38.831: INFO: (4) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 13.469455ms)
Jan  4 15:38:38.831: INFO: (4) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 13.737882ms)
Jan  4 15:38:38.832: INFO: (4) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 14.454344ms)
Jan  4 15:38:38.832: INFO: (4) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 15.289045ms)
Jan  4 15:38:38.841: INFO: (5) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 7.611022ms)
Jan  4 15:38:38.841: INFO: (5) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 8.22282ms)
Jan  4 15:38:38.842: INFO: (5) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 9.116214ms)
Jan  4 15:38:38.842: INFO: (5) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 14.751472ms)
Jan  4 15:38:38.848: INFO: (5) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 15.23444ms)
Jan  4 15:38:38.849: INFO: (5) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 15.888129ms)
Jan  4 15:38:38.851: INFO: (5) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 18.347442ms)
Jan  4 15:38:38.851: INFO: (5) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 17.722936ms)
Jan  4 15:38:38.853: INFO: (5) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 19.14452ms)
Jan  4 15:38:38.854: INFO: (5) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 20.180704ms)
Jan  4 15:38:38.854: INFO: (5) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 20.256732ms)
Jan  4 15:38:38.855: INFO: (5) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 22.006287ms)
Jan  4 15:38:38.856: INFO: (5) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 22.319927ms)
Jan  4 15:38:38.868: INFO: (6) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 11.935825ms)
Jan  4 15:38:38.870: INFO: (6) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 13.511107ms)
Jan  4 15:38:38.870: INFO: (6) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 14.083441ms)
Jan  4 15:38:38.870: INFO: (6) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 14.053244ms)
Jan  4 15:38:38.874: INFO: (6) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 17.244728ms)
Jan  4 15:38:38.874: INFO: (6) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 17.181921ms)
Jan  4 15:38:38.874: INFO: (6) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 17.541328ms)
Jan  4 15:38:38.874: INFO: (6) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 17.861871ms)
Jan  4 15:38:38.875: INFO: (6) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 18.097147ms)
Jan  4 15:38:38.875: INFO: (6) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 18.373118ms)
Jan  4 15:38:38.875: INFO: (6) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 18.945197ms)
Jan  4 15:38:38.875: INFO: (6) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 18.891572ms)
Jan  4 15:38:38.875: INFO: (6) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 18.887298ms)
Jan  4 15:38:38.945: INFO: (6) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 88.983051ms)
Jan  4 15:38:38.952: INFO: (7) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 7.24847ms)
Jan  4 15:38:38.957: INFO: (7) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 11.753069ms)
Jan  4 15:38:38.959: INFO: (7) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 14.008237ms)
Jan  4 15:38:38.959: INFO: (7) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 14.092907ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 14.65616ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 14.755632ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 14.764041ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 14.843898ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 14.968616ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 15.039341ms)
Jan  4 15:38:38.960: INFO: (7) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 8.517567ms)
Jan  4 15:38:38.974: INFO: (8) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 8.520602ms)
Jan  4 15:38:38.974: INFO: (8) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 8.386535ms)
Jan  4 15:38:38.974: INFO: (8) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 8.499005ms)
Jan  4 15:38:38.975: INFO: (8) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: ... (200; 12.006442ms)
Jan  4 15:38:38.977: INFO: (8) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 12.492117ms)
Jan  4 15:38:38.978: INFO: (8) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 13.012488ms)
Jan  4 15:38:38.983: INFO: (8) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 18.098258ms)
Jan  4 15:38:38.983: INFO: (8) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 18.23681ms)
Jan  4 15:38:38.983: INFO: (8) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 18.271325ms)
Jan  4 15:38:38.983: INFO: (8) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 18.511564ms)
Jan  4 15:38:38.990: INFO: (9) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 6.690099ms)
Jan  4 15:38:38.991: INFO: (9) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 7.206193ms)
Jan  4 15:38:38.992: INFO: (9) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 8.146019ms)
Jan  4 15:38:38.995: INFO: (9) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 11.651177ms)
Jan  4 15:38:38.995: INFO: (9) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 11.783273ms)
Jan  4 15:38:38.996: INFO: (9) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 11.877382ms)
Jan  4 15:38:38.996: INFO: (9) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 12.112791ms)
Jan  4 15:38:38.996: INFO: (9) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 12.11642ms)
Jan  4 15:38:38.996: INFO: (9) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 12.483836ms)
Jan  4 15:38:38.996: INFO: (9) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 12.477277ms)
Jan  4 15:38:38.996: INFO: (9) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 5.765784ms)
Jan  4 15:38:39.003: INFO: (10) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 5.710705ms)
Jan  4 15:38:39.003: INFO: (10) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 5.816336ms)
Jan  4 15:38:39.004: INFO: (10) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 6.719558ms)
Jan  4 15:38:39.004: INFO: (10) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 6.73782ms)
Jan  4 15:38:39.004: INFO: (10) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 6.801095ms)
Jan  4 15:38:39.005: INFO: (10) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 8.053747ms)
Jan  4 15:38:39.005: INFO: (10) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 8.029954ms)
Jan  4 15:38:39.006: INFO: (10) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 8.529774ms)
Jan  4 15:38:39.006: INFO: (10) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 9.023191ms)
Jan  4 15:38:39.007: INFO: (10) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 9.288187ms)
Jan  4 15:38:39.007: INFO: (10) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 9.373574ms)
Jan  4 15:38:39.007: INFO: (10) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test (200; 8.381551ms)
Jan  4 15:38:39.018: INFO: (11) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 9.390413ms)
Jan  4 15:38:39.018: INFO: (11) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 9.369322ms)
Jan  4 15:38:39.018: INFO: (11) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 10.146978ms)
Jan  4 15:38:39.019: INFO: (11) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 10.767607ms)
Jan  4 15:38:39.019: INFO: (11) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 10.788461ms)
Jan  4 15:38:39.020: INFO: (11) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 11.401628ms)
Jan  4 15:38:39.020: INFO: (11) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 11.452366ms)
Jan  4 15:38:39.020: INFO: (11) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 11.447554ms)
Jan  4 15:38:39.020: INFO: (11) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 11.528406ms)
Jan  4 15:38:39.020: INFO: (11) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 11.482643ms)
Jan  4 15:38:39.025: INFO: (12) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 4.993914ms)
Jan  4 15:38:39.026: INFO: (12) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 5.784925ms)
Jan  4 15:38:39.026: INFO: (12) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 5.961541ms)
Jan  4 15:38:39.027: INFO: (12) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 6.839442ms)
Jan  4 15:38:39.027: INFO: (12) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 6.915864ms)
Jan  4 15:38:39.027: INFO: (12) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 8.251642ms)
Jan  4 15:38:39.028: INFO: (12) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 8.301533ms)
Jan  4 15:38:39.028: INFO: (12) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 8.294043ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 8.741367ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 9.093579ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 9.307856ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 9.353013ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 9.352038ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 9.39468ms)
Jan  4 15:38:39.029: INFO: (12) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 9.444476ms)
Jan  4 15:38:39.033: INFO: (13) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 4.051691ms)
Jan  4 15:38:39.034: INFO: (13) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 4.567545ms)
Jan  4 15:38:39.037: INFO: (13) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 8.002592ms)
Jan  4 15:38:39.039: INFO: (13) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 9.93211ms)
Jan  4 15:38:39.040: INFO: (13) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: ... (200; 10.812425ms)
Jan  4 15:38:39.040: INFO: (13) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 11.106828ms)
Jan  4 15:38:39.041: INFO: (13) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 11.376639ms)
Jan  4 15:38:39.041: INFO: (13) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 11.865346ms)
Jan  4 15:38:39.041: INFO: (13) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 12.059631ms)
Jan  4 15:38:39.042: INFO: (13) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 12.560299ms)
Jan  4 15:38:39.042: INFO: (13) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 12.573902ms)
Jan  4 15:38:39.043: INFO: (13) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 13.206757ms)
Jan  4 15:38:39.043: INFO: (13) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 13.259001ms)
Jan  4 15:38:39.043: INFO: (13) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 13.25939ms)
Jan  4 15:38:39.049: INFO: (14) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 6.471405ms)
Jan  4 15:38:39.050: INFO: (14) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 6.73374ms)
Jan  4 15:38:39.051: INFO: (14) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 7.80028ms)
Jan  4 15:38:39.051: INFO: (14) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 7.935962ms)
Jan  4 15:38:39.051: INFO: (14) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 8.151103ms)
Jan  4 15:38:39.051: INFO: (14) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 8.414978ms)
Jan  4 15:38:39.052: INFO: (14) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test<... (200; 9.075156ms)
Jan  4 15:38:39.052: INFO: (14) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 9.539833ms)
Jan  4 15:38:39.053: INFO: (14) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 9.92147ms)
Jan  4 15:38:39.053: INFO: (14) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 9.954656ms)
Jan  4 15:38:39.054: INFO: (14) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 11.551474ms)
Jan  4 15:38:39.054: INFO: (14) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 11.630947ms)
Jan  4 15:38:39.054: INFO: (14) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 11.553043ms)
Jan  4 15:38:39.054: INFO: (14) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 11.540892ms)
Jan  4 15:38:39.056: INFO: (14) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 13.041723ms)
Jan  4 15:38:39.069: INFO: (15) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 12.711309ms)
Jan  4 15:38:39.069: INFO: (15) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 13.005431ms)
Jan  4 15:38:39.072: INFO: (15) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:160/proxy/: foo (200; 16.328774ms)
Jan  4 15:38:39.073: INFO: (15) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 16.611183ms)
Jan  4 15:38:39.073: INFO: (15) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 16.570023ms)
Jan  4 15:38:39.073: INFO: (15) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 16.610533ms)
Jan  4 15:38:39.073: INFO: (15) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 16.57676ms)
Jan  4 15:38:39.074: INFO: (15) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 17.445052ms)
Jan  4 15:38:39.074: INFO: (15) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 17.675631ms)
Jan  4 15:38:39.074: INFO: (15) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 17.868076ms)
Jan  4 15:38:39.074: INFO: (15) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 17.753949ms)
Jan  4 15:38:39.074: INFO: (15) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 17.800834ms)
Jan  4 15:38:39.074: INFO: (15) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: ... (200; 9.523357ms)
Jan  4 15:38:39.085: INFO: (16) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 9.67246ms)
Jan  4 15:38:39.085: INFO: (16) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 9.654661ms)
Jan  4 15:38:39.088: INFO: (16) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 12.341531ms)
Jan  4 15:38:39.088: INFO: (16) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test (200; 13.328344ms)
Jan  4 15:38:39.089: INFO: (16) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 13.236037ms)
Jan  4 15:38:39.090: INFO: (16) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 14.325123ms)
Jan  4 15:38:39.091: INFO: (16) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname1/proxy/: tls baz (200; 15.087417ms)
Jan  4 15:38:39.092: INFO: (16) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 16.027811ms)
Jan  4 15:38:39.093: INFO: (16) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname2/proxy/: bar (200; 16.894087ms)
Jan  4 15:38:39.093: INFO: (16) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 17.092077ms)
Jan  4 15:38:39.101: INFO: (17) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 8.203704ms)
Jan  4 15:38:39.101: INFO: (17) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 8.083243ms)
Jan  4 15:38:39.102: INFO: (17) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 9.228544ms)
Jan  4 15:38:39.102: INFO: (17) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 9.194746ms)
Jan  4 15:38:39.102: INFO: (17) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx/proxy/: test (200; 9.417947ms)
Jan  4 15:38:39.102: INFO: (17) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test (200; 11.801197ms)
Jan  4 15:38:39.118: INFO: (18) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 11.89707ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname2/proxy/: bar (200; 12.244796ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 12.583539ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 12.53869ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/services/http:proxy-service-dh665:portname1/proxy/: foo (200; 12.673373ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 12.580611ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 12.699275ms)
Jan  4 15:38:39.119: INFO: (18) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 12.6746ms)
Jan  4 15:38:39.132: INFO: (19) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:1080/proxy/: test<... (200; 12.227681ms)
Jan  4 15:38:39.132: INFO: (19) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:160/proxy/: foo (200; 12.280329ms)
Jan  4 15:38:39.132: INFO: (19) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:460/proxy/: tls baz (200; 12.29905ms)
Jan  4 15:38:39.132: INFO: (19) /api/v1/namespaces/proxy-8381/services/https:proxy-service-dh665:tlsportname2/proxy/: tls qux (200; 12.257576ms)
Jan  4 15:38:39.132: INFO: (19) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:162/proxy/: bar (200; 12.781244ms)
Jan  4 15:38:39.132: INFO: (19) /api/v1/namespaces/proxy-8381/pods/http:proxy-service-dh665-rmhmx:1080/proxy/: ... (200; 12.816983ms)
Jan  4 15:38:39.143: INFO: (19) /api/v1/namespaces/proxy-8381/pods/proxy-service-dh665-rmhmx:162/proxy/: bar (200; 23.50926ms)
Jan  4 15:38:39.143: INFO: (19) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:462/proxy/: tls qux (200; 24.031221ms)
Jan  4 15:38:39.144: INFO: (19) /api/v1/namespaces/proxy-8381/pods/https:proxy-service-dh665-rmhmx:443/proxy/: test (200; 25.459878ms)
Jan  4 15:38:39.145: INFO: (19) /api/v1/namespaces/proxy-8381/services/proxy-service-dh665:portname1/proxy/: foo (200; 25.524134ms)
STEP: deleting ReplicationController proxy-service-dh665 in namespace proxy-8381, will wait for the garbage collector to delete the pods
Jan  4 15:38:39.209: INFO: Deleting ReplicationController proxy-service-dh665 took: 10.480322ms
Jan  4 15:38:39.509: INFO: Terminating ReplicationController proxy-service-dh665 pods took: 300.38989ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:38:52.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8381" for this suite.

• [SLOW TEST:29.914 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":251,"skipped":3935,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:38:52.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:38:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8088" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":3958,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:38:52.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:39:42.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7471" for this suite.

• [SLOW TEST:50.361 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4017,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:39:42.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  4 15:39:59.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:39:59.419: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:01.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:01.427: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:03.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:03.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:05.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:05.426: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:07.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:07.429: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:09.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:09.424: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:11.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:11.425: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:40:13.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:40:13.425: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:40:13.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3996" for this suite.

• [SLOW TEST:30.479 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4038,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:40:13.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1039
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan  4 15:40:13.544: INFO: Found 0 stateful pods, waiting for 3
Jan  4 15:40:23.719: INFO: Found 2 stateful pods, waiting for 3
Jan  4 15:40:33.549: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:40:33.549: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:40:33.549: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 15:40:43.550: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:40:43.550: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:40:43.550: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan  4 15:40:43.584: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  4 15:40:54.367: INFO: Updating stateful set ss2
Jan  4 15:40:54.585: INFO: Waiting for Pod statefulset-1039/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan  4 15:41:04.797: INFO: Found 2 stateful pods, waiting for 3
Jan  4 15:41:14.805: INFO: Found 2 stateful pods, waiting for 3
Jan  4 15:41:24.803: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:41:24.803: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:41:24.803: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  4 15:41:24.834: INFO: Updating stateful set ss2
Jan  4 15:41:25.377: INFO: Waiting for Pod statefulset-1039/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan  4 15:41:36.321: INFO: Updating stateful set ss2
Jan  4 15:41:36.402: INFO: Waiting for StatefulSet statefulset-1039/ss2 to complete update
Jan  4 15:41:36.403: INFO: Waiting for Pod statefulset-1039/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan  4 15:41:46.412: INFO: Waiting for StatefulSet statefulset-1039/ss2 to complete update
Jan  4 15:41:46.412: INFO: Waiting for Pod statefulset-1039/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan  4 15:41:56.410: INFO: Waiting for StatefulSet statefulset-1039/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan  4 15:42:06.412: INFO: Deleting all statefulset in ns statefulset-1039
Jan  4 15:42:06.416: INFO: Scaling statefulset ss2 to 0
Jan  4 15:42:46.448: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:42:46.453: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:42:46.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1039" for this suite.

• [SLOW TEST:153.078 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":255,"skipped":4088,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:42:46.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  4 15:43:06.678: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:06.678: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:06.717424       9 log.go:172] (0xc002b20840) (0xc00181ca00) Create stream
I0104 15:43:06.717467       9 log.go:172] (0xc002b20840) (0xc00181ca00) Stream added, broadcasting: 1
I0104 15:43:06.720549       9 log.go:172] (0xc002b20840) Reply frame received for 1
I0104 15:43:06.720587       9 log.go:172] (0xc002b20840) (0xc001266dc0) Create stream
I0104 15:43:06.720597       9 log.go:172] (0xc002b20840) (0xc001266dc0) Stream added, broadcasting: 3
I0104 15:43:06.721955       9 log.go:172] (0xc002b20840) Reply frame received for 3
I0104 15:43:06.721969       9 log.go:172] (0xc002b20840) (0xc00181caa0) Create stream
I0104 15:43:06.721989       9 log.go:172] (0xc002b20840) (0xc00181caa0) Stream added, broadcasting: 5
I0104 15:43:06.727467       9 log.go:172] (0xc002b20840) Reply frame received for 5
I0104 15:43:06.791227       9 log.go:172] (0xc002b20840) Data frame received for 3
I0104 15:43:06.791306       9 log.go:172] (0xc001266dc0) (3) Data frame handling
I0104 15:43:06.791320       9 log.go:172] (0xc001266dc0) (3) Data frame sent
I0104 15:43:06.890655       9 log.go:172] (0xc002b20840) (0xc001266dc0) Stream removed, broadcasting: 3
I0104 15:43:06.890825       9 log.go:172] (0xc002b20840) Data frame received for 1
I0104 15:43:06.890839       9 log.go:172] (0xc00181ca00) (1) Data frame handling
I0104 15:43:06.890853       9 log.go:172] (0xc00181ca00) (1) Data frame sent
I0104 15:43:06.890862       9 log.go:172] (0xc002b20840) (0xc00181ca00) Stream removed, broadcasting: 1
I0104 15:43:06.891067       9 log.go:172] (0xc002b20840) (0xc00181caa0) Stream removed, broadcasting: 5
I0104 15:43:06.891200       9 log.go:172] (0xc002b20840) Go away received
I0104 15:43:06.891369       9 log.go:172] (0xc002b20840) (0xc00181ca00) Stream removed, broadcasting: 1
I0104 15:43:06.891409       9 log.go:172] (0xc002b20840) (0xc001266dc0) Stream removed, broadcasting: 3
I0104 15:43:06.891448       9 log.go:172] (0xc002b20840) (0xc00181caa0) Stream removed, broadcasting: 5
Jan  4 15:43:06.891: INFO: Exec stderr: ""
Jan  4 15:43:06.891: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:06.891: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:06.945244       9 log.go:172] (0xc002b20dc0) (0xc00181cc80) Create stream
I0104 15:43:06.945460       9 log.go:172] (0xc002b20dc0) (0xc00181cc80) Stream added, broadcasting: 1
I0104 15:43:06.949725       9 log.go:172] (0xc002b20dc0) Reply frame received for 1
I0104 15:43:06.949801       9 log.go:172] (0xc002b20dc0) (0xc0019f8500) Create stream
I0104 15:43:06.949810       9 log.go:172] (0xc002b20dc0) (0xc0019f8500) Stream added, broadcasting: 3
I0104 15:43:06.951980       9 log.go:172] (0xc002b20dc0) Reply frame received for 3
I0104 15:43:06.952048       9 log.go:172] (0xc002b20dc0) (0xc0019f8640) Create stream
I0104 15:43:06.952057       9 log.go:172] (0xc002b20dc0) (0xc0019f8640) Stream added, broadcasting: 5
I0104 15:43:06.954964       9 log.go:172] (0xc002b20dc0) Reply frame received for 5
I0104 15:43:07.041031       9 log.go:172] (0xc002b20dc0) Data frame received for 3
I0104 15:43:07.041128       9 log.go:172] (0xc0019f8500) (3) Data frame handling
I0104 15:43:07.041152       9 log.go:172] (0xc0019f8500) (3) Data frame sent
I0104 15:43:07.137353       9 log.go:172] (0xc002b20dc0) (0xc0019f8500) Stream removed, broadcasting: 3
I0104 15:43:07.137547       9 log.go:172] (0xc002b20dc0) Data frame received for 1
I0104 15:43:07.137563       9 log.go:172] (0xc00181cc80) (1) Data frame handling
I0104 15:43:07.137579       9 log.go:172] (0xc00181cc80) (1) Data frame sent
I0104 15:43:07.137628       9 log.go:172] (0xc002b20dc0) (0xc00181cc80) Stream removed, broadcasting: 1
I0104 15:43:07.137761       9 log.go:172] (0xc002b20dc0) (0xc0019f8640) Stream removed, broadcasting: 5
I0104 15:43:07.137808       9 log.go:172] (0xc002b20dc0) (0xc00181cc80) Stream removed, broadcasting: 1
I0104 15:43:07.137818       9 log.go:172] (0xc002b20dc0) (0xc0019f8500) Stream removed, broadcasting: 3
I0104 15:43:07.137829       9 log.go:172] (0xc002b20dc0) (0xc0019f8640) Stream removed, broadcasting: 5
I0104 15:43:07.138087       9 log.go:172] (0xc002b20dc0) Go away received
Jan  4 15:43:07.138: INFO: Exec stderr: ""
Jan  4 15:43:07.138: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:07.138: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:07.182175       9 log.go:172] (0xc002597e40) (0xc0019f8aa0) Create stream
I0104 15:43:07.182233       9 log.go:172] (0xc002597e40) (0xc0019f8aa0) Stream added, broadcasting: 1
I0104 15:43:07.185063       9 log.go:172] (0xc002597e40) Reply frame received for 1
I0104 15:43:07.185128       9 log.go:172] (0xc002597e40) (0xc001266e60) Create stream
I0104 15:43:07.185137       9 log.go:172] (0xc002597e40) (0xc001266e60) Stream added, broadcasting: 3
I0104 15:43:07.186509       9 log.go:172] (0xc002597e40) Reply frame received for 3
I0104 15:43:07.186581       9 log.go:172] (0xc002597e40) (0xc0019f8be0) Create stream
I0104 15:43:07.186600       9 log.go:172] (0xc002597e40) (0xc0019f8be0) Stream added, broadcasting: 5
I0104 15:43:07.188018       9 log.go:172] (0xc002597e40) Reply frame received for 5
I0104 15:43:07.245710       9 log.go:172] (0xc002597e40) Data frame received for 3
I0104 15:43:07.245829       9 log.go:172] (0xc001266e60) (3) Data frame handling
I0104 15:43:07.245865       9 log.go:172] (0xc001266e60) (3) Data frame sent
I0104 15:43:07.316045       9 log.go:172] (0xc002597e40) (0xc001266e60) Stream removed, broadcasting: 3
I0104 15:43:07.316155       9 log.go:172] (0xc002597e40) Data frame received for 1
I0104 15:43:07.316182       9 log.go:172] (0xc0019f8aa0) (1) Data frame handling
I0104 15:43:07.316190       9 log.go:172] (0xc0019f8aa0) (1) Data frame sent
I0104 15:43:07.316222       9 log.go:172] (0xc002597e40) (0xc0019f8aa0) Stream removed, broadcasting: 1
I0104 15:43:07.316243       9 log.go:172] (0xc002597e40) (0xc0019f8be0) Stream removed, broadcasting: 5
I0104 15:43:07.316356       9 log.go:172] (0xc002597e40) Go away received
I0104 15:43:07.316436       9 log.go:172] (0xc002597e40) (0xc0019f8aa0) Stream removed, broadcasting: 1
I0104 15:43:07.316468       9 log.go:172] (0xc002597e40) (0xc001266e60) Stream removed, broadcasting: 3
I0104 15:43:07.316485       9 log.go:172] (0xc002597e40) (0xc0019f8be0) Stream removed, broadcasting: 5
Jan  4 15:43:07.316: INFO: Exec stderr: ""
Jan  4 15:43:07.316: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:07.316: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:07.351870       9 log.go:172] (0xc0027d2420) (0xc001267400) Create stream
I0104 15:43:07.351908       9 log.go:172] (0xc0027d2420) (0xc001267400) Stream added, broadcasting: 1
I0104 15:43:07.355445       9 log.go:172] (0xc0027d2420) Reply frame received for 1
I0104 15:43:07.355476       9 log.go:172] (0xc0027d2420) (0xc00181ce60) Create stream
I0104 15:43:07.355487       9 log.go:172] (0xc0027d2420) (0xc00181ce60) Stream added, broadcasting: 3
I0104 15:43:07.356542       9 log.go:172] (0xc0027d2420) Reply frame received for 3
I0104 15:43:07.356564       9 log.go:172] (0xc0027d2420) (0xc0019f8dc0) Create stream
I0104 15:43:07.356573       9 log.go:172] (0xc0027d2420) (0xc0019f8dc0) Stream added, broadcasting: 5
I0104 15:43:07.357492       9 log.go:172] (0xc0027d2420) Reply frame received for 5
I0104 15:43:07.429504       9 log.go:172] (0xc0027d2420) Data frame received for 3
I0104 15:43:07.429548       9 log.go:172] (0xc00181ce60) (3) Data frame handling
I0104 15:43:07.429566       9 log.go:172] (0xc00181ce60) (3) Data frame sent
I0104 15:43:07.501556       9 log.go:172] (0xc0027d2420) (0xc00181ce60) Stream removed, broadcasting: 3
I0104 15:43:07.501638       9 log.go:172] (0xc0027d2420) Data frame received for 1
I0104 15:43:07.501646       9 log.go:172] (0xc001267400) (1) Data frame handling
I0104 15:43:07.501657       9 log.go:172] (0xc001267400) (1) Data frame sent
I0104 15:43:07.501665       9 log.go:172] (0xc0027d2420) (0xc001267400) Stream removed, broadcasting: 1
I0104 15:43:07.501781       9 log.go:172] (0xc0027d2420) (0xc0019f8dc0) Stream removed, broadcasting: 5
I0104 15:43:07.501804       9 log.go:172] (0xc0027d2420) (0xc001267400) Stream removed, broadcasting: 1
I0104 15:43:07.501813       9 log.go:172] (0xc0027d2420) (0xc00181ce60) Stream removed, broadcasting: 3
I0104 15:43:07.501822       9 log.go:172] (0xc0027d2420) (0xc0019f8dc0) Stream removed, broadcasting: 5
Jan  4 15:43:07.502: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  4 15:43:07.502: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:07.502: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:07.502706       9 log.go:172] (0xc0027d2420) Go away received
I0104 15:43:07.536013       9 log.go:172] (0xc0027d2a50) (0xc0012677c0) Create stream
I0104 15:43:07.536207       9 log.go:172] (0xc0027d2a50) (0xc0012677c0) Stream added, broadcasting: 1
I0104 15:43:07.539916       9 log.go:172] (0xc0027d2a50) Reply frame received for 1
I0104 15:43:07.539943       9 log.go:172] (0xc0027d2a50) (0xc00112a000) Create stream
I0104 15:43:07.539951       9 log.go:172] (0xc0027d2a50) (0xc00112a000) Stream added, broadcasting: 3
I0104 15:43:07.540831       9 log.go:172] (0xc0027d2a50) Reply frame received for 3
I0104 15:43:07.540849       9 log.go:172] (0xc0027d2a50) (0xc00181cfa0) Create stream
I0104 15:43:07.540857       9 log.go:172] (0xc0027d2a50) (0xc00181cfa0) Stream added, broadcasting: 5
I0104 15:43:07.542040       9 log.go:172] (0xc0027d2a50) Reply frame received for 5
I0104 15:43:07.599789       9 log.go:172] (0xc0027d2a50) Data frame received for 3
I0104 15:43:07.599863       9 log.go:172] (0xc00112a000) (3) Data frame handling
I0104 15:43:07.599876       9 log.go:172] (0xc00112a000) (3) Data frame sent
I0104 15:43:07.682841       9 log.go:172] (0xc0027d2a50) (0xc00112a000) Stream removed, broadcasting: 3
I0104 15:43:07.682960       9 log.go:172] (0xc0027d2a50) (0xc00181cfa0) Stream removed, broadcasting: 5
I0104 15:43:07.682998       9 log.go:172] (0xc0027d2a50) Data frame received for 1
I0104 15:43:07.683020       9 log.go:172] (0xc0012677c0) (1) Data frame handling
I0104 15:43:07.683038       9 log.go:172] (0xc0012677c0) (1) Data frame sent
I0104 15:43:07.683051       9 log.go:172] (0xc0027d2a50) (0xc0012677c0) Stream removed, broadcasting: 1
I0104 15:43:07.683081       9 log.go:172] (0xc0027d2a50) Go away received
I0104 15:43:07.683234       9 log.go:172] (0xc0027d2a50) (0xc0012677c0) Stream removed, broadcasting: 1
I0104 15:43:07.683400       9 log.go:172] (0xc0027d2a50) (0xc00112a000) Stream removed, broadcasting: 3
I0104 15:43:07.683429       9 log.go:172] (0xc0027d2a50) (0xc00181cfa0) Stream removed, broadcasting: 5
Jan  4 15:43:07.683: INFO: Exec stderr: ""
Jan  4 15:43:07.683: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:07.683: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:07.716530       9 log.go:172] (0xc002b213f0) (0xc00181d400) Create stream
I0104 15:43:07.716772       9 log.go:172] (0xc002b213f0) (0xc00181d400) Stream added, broadcasting: 1
I0104 15:43:07.721397       9 log.go:172] (0xc002b213f0) Reply frame received for 1
I0104 15:43:07.721420       9 log.go:172] (0xc002b213f0) (0xc00181d540) Create stream
I0104 15:43:07.721429       9 log.go:172] (0xc002b213f0) (0xc00181d540) Stream added, broadcasting: 3
I0104 15:43:07.722649       9 log.go:172] (0xc002b213f0) Reply frame received for 3
I0104 15:43:07.722672       9 log.go:172] (0xc002b213f0) (0xc0019f90e0) Create stream
I0104 15:43:07.722682       9 log.go:172] (0xc002b213f0) (0xc0019f90e0) Stream added, broadcasting: 5
I0104 15:43:07.723817       9 log.go:172] (0xc002b213f0) Reply frame received for 5
I0104 15:43:07.808166       9 log.go:172] (0xc002b213f0) Data frame received for 3
I0104 15:43:07.808250       9 log.go:172] (0xc00181d540) (3) Data frame handling
I0104 15:43:07.808282       9 log.go:172] (0xc00181d540) (3) Data frame sent
I0104 15:43:07.902254       9 log.go:172] (0xc002b213f0) (0xc0019f90e0) Stream removed, broadcasting: 5
I0104 15:43:07.902478       9 log.go:172] (0xc002b213f0) Data frame received for 1
I0104 15:43:07.902588       9 log.go:172] (0xc002b213f0) (0xc00181d540) Stream removed, broadcasting: 3
I0104 15:43:07.902649       9 log.go:172] (0xc00181d400) (1) Data frame handling
I0104 15:43:07.902662       9 log.go:172] (0xc00181d400) (1) Data frame sent
I0104 15:43:07.902677       9 log.go:172] (0xc002b213f0) (0xc00181d400) Stream removed, broadcasting: 1
I0104 15:43:07.902813       9 log.go:172] (0xc002b213f0) (0xc00181d400) Stream removed, broadcasting: 1
I0104 15:43:07.902821       9 log.go:172] (0xc002b213f0) (0xc00181d540) Stream removed, broadcasting: 3
I0104 15:43:07.902830       9 log.go:172] (0xc002b213f0) (0xc0019f90e0) Stream removed, broadcasting: 5
Jan  4 15:43:07.903: INFO: Exec stderr: ""
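
At this point the spec has covered two of its three cases: busybox-1 and busybox-2 see a kubelet-managed /etc/hosts, while busybox-3, which mounts its own file over /etc/hosts, does not. The hostNetwork=true pod verified next is the third case. A sketch of the pod-level variants involved; the volume name, image, and commands are assumptions, with the hostPath mount mirroring what makes busybox-3's /etc/hosts unmanaged:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                   # pod name as in the log
spec:
  hostNetwork: false               # the companion test-host-network-pod sets this to true
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts             # the node's own hosts file
      type: File
  containers:
  - name: busybox-1                # /etc/hosts is kubelet-managed here
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts        # explicit mount: kubelet leaves it alone
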
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  4 15:43:07.903: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:07.903: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:07.905884       9 log.go:172] (0xc002b213f0) Go away received
I0104 15:43:07.944004       9 log.go:172] (0xc005c9a4d0) (0xc0019f94a0) Create stream
I0104 15:43:07.944179       9 log.go:172] (0xc005c9a4d0) (0xc0019f94a0) Stream added, broadcasting: 1
I0104 15:43:07.951071       9 log.go:172] (0xc005c9a4d0) Reply frame received for 1
I0104 15:43:07.951120       9 log.go:172] (0xc005c9a4d0) (0xc00112a1e0) Create stream
I0104 15:43:07.951129       9 log.go:172] (0xc005c9a4d0) (0xc00112a1e0) Stream added, broadcasting: 3
I0104 15:43:07.952539       9 log.go:172] (0xc005c9a4d0) Reply frame received for 3
I0104 15:43:07.952650       9 log.go:172] (0xc005c9a4d0) (0xc00181d900) Create stream
I0104 15:43:07.952664       9 log.go:172] (0xc005c9a4d0) (0xc00181d900) Stream added, broadcasting: 5
I0104 15:43:07.954307       9 log.go:172] (0xc005c9a4d0) Reply frame received for 5
I0104 15:43:08.022185       9 log.go:172] (0xc005c9a4d0) Data frame received for 3
I0104 15:43:08.022230       9 log.go:172] (0xc00112a1e0) (3) Data frame handling
I0104 15:43:08.022242       9 log.go:172] (0xc00112a1e0) (3) Data frame sent
I0104 15:43:08.085232       9 log.go:172] (0xc005c9a4d0) Data frame received for 1
I0104 15:43:08.085296       9 log.go:172] (0xc005c9a4d0) (0xc00181d900) Stream removed, broadcasting: 5
I0104 15:43:08.085352       9 log.go:172] (0xc0019f94a0) (1) Data frame handling
I0104 15:43:08.085365       9 log.go:172] (0xc0019f94a0) (1) Data frame sent
I0104 15:43:08.085377       9 log.go:172] (0xc005c9a4d0) (0xc00112a1e0) Stream removed, broadcasting: 3
I0104 15:43:08.085392       9 log.go:172] (0xc005c9a4d0) (0xc0019f94a0) Stream removed, broadcasting: 1
I0104 15:43:08.085402       9 log.go:172] (0xc005c9a4d0) Go away received
I0104 15:43:08.085461       9 log.go:172] (0xc005c9a4d0) (0xc0019f94a0) Stream removed, broadcasting: 1
I0104 15:43:08.085470       9 log.go:172] (0xc005c9a4d0) (0xc00112a1e0) Stream removed, broadcasting: 3
I0104 15:43:08.085474       9 log.go:172] (0xc005c9a4d0) (0xc00181d900) Stream removed, broadcasting: 5
Jan  4 15:43:08.085: INFO: Exec stderr: ""
Jan  4 15:43:08.085: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:08.085: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:08.125397       9 log.go:172] (0xc005c9ab00) (0xc0019f9900) Create stream
I0104 15:43:08.125469       9 log.go:172] (0xc005c9ab00) (0xc0019f9900) Stream added, broadcasting: 1
I0104 15:43:08.131209       9 log.go:172] (0xc005c9ab00) Reply frame received for 1
I0104 15:43:08.131231       9 log.go:172] (0xc005c9ab00) (0xc0019f99a0) Create stream
I0104 15:43:08.131237       9 log.go:172] (0xc005c9ab00) (0xc0019f99a0) Stream added, broadcasting: 3
I0104 15:43:08.132430       9 log.go:172] (0xc005c9ab00) Reply frame received for 3
I0104 15:43:08.132453       9 log.go:172] (0xc005c9ab00) (0xc00112a820) Create stream
I0104 15:43:08.132463       9 log.go:172] (0xc005c9ab00) (0xc00112a820) Stream added, broadcasting: 5
I0104 15:43:08.133705       9 log.go:172] (0xc005c9ab00) Reply frame received for 5
I0104 15:43:08.194037       9 log.go:172] (0xc005c9ab00) Data frame received for 3
I0104 15:43:08.194075       9 log.go:172] (0xc0019f99a0) (3) Data frame handling
I0104 15:43:08.194089       9 log.go:172] (0xc0019f99a0) (3) Data frame sent
I0104 15:43:08.270975       9 log.go:172] (0xc005c9ab00) Data frame received for 1
I0104 15:43:08.271106       9 log.go:172] (0xc005c9ab00) (0xc0019f99a0) Stream removed, broadcasting: 3
I0104 15:43:08.271172       9 log.go:172] (0xc0019f9900) (1) Data frame handling
I0104 15:43:08.271187       9 log.go:172] (0xc0019f9900) (1) Data frame sent
I0104 15:43:08.271212       9 log.go:172] (0xc005c9ab00) (0xc00112a820) Stream removed, broadcasting: 5
I0104 15:43:08.271259       9 log.go:172] (0xc005c9ab00) (0xc0019f9900) Stream removed, broadcasting: 1
I0104 15:43:08.271285       9 log.go:172] (0xc005c9ab00) Go away received
I0104 15:43:08.271531       9 log.go:172] (0xc005c9ab00) (0xc0019f9900) Stream removed, broadcasting: 1
I0104 15:43:08.271551       9 log.go:172] (0xc005c9ab00) (0xc0019f99a0) Stream removed, broadcasting: 3
I0104 15:43:08.271564       9 log.go:172] (0xc005c9ab00) (0xc00112a820) Stream removed, broadcasting: 5
Jan  4 15:43:08.271: INFO: Exec stderr: ""
Jan  4 15:43:08.271: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:08.271: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:08.329395       9 log.go:172] (0xc0027d3080) (0xc001267d60) Create stream
I0104 15:43:08.329571       9 log.go:172] (0xc0027d3080) (0xc001267d60) Stream added, broadcasting: 1
I0104 15:43:08.338256       9 log.go:172] (0xc0027d3080) Reply frame received for 1
I0104 15:43:08.338368       9 log.go:172] (0xc0027d3080) (0xc00181d9a0) Create stream
I0104 15:43:08.338382       9 log.go:172] (0xc0027d3080) (0xc00181d9a0) Stream added, broadcasting: 3
I0104 15:43:08.340432       9 log.go:172] (0xc0027d3080) Reply frame received for 3
I0104 15:43:08.340487       9 log.go:172] (0xc0027d3080) (0xc0019f9ae0) Create stream
I0104 15:43:08.340511       9 log.go:172] (0xc0027d3080) (0xc0019f9ae0) Stream added, broadcasting: 5
I0104 15:43:08.343003       9 log.go:172] (0xc0027d3080) Reply frame received for 5
I0104 15:43:08.417310       9 log.go:172] (0xc0027d3080) Data frame received for 3
I0104 15:43:08.417347       9 log.go:172] (0xc00181d9a0) (3) Data frame handling
I0104 15:43:08.417361       9 log.go:172] (0xc00181d9a0) (3) Data frame sent
I0104 15:43:08.491545       9 log.go:172] (0xc0027d3080) Data frame received for 1
I0104 15:43:08.491683       9 log.go:172] (0xc0027d3080) (0xc0019f9ae0) Stream removed, broadcasting: 5
I0104 15:43:08.491725       9 log.go:172] (0xc001267d60) (1) Data frame handling
I0104 15:43:08.491742       9 log.go:172] (0xc001267d60) (1) Data frame sent
I0104 15:43:08.491762       9 log.go:172] (0xc0027d3080) (0xc00181d9a0) Stream removed, broadcasting: 3
I0104 15:43:08.491777       9 log.go:172] (0xc0027d3080) (0xc001267d60) Stream removed, broadcasting: 1
I0104 15:43:08.491837       9 log.go:172] (0xc0027d3080) (0xc001267d60) Stream removed, broadcasting: 1
I0104 15:43:08.491844       9 log.go:172] (0xc0027d3080) (0xc00181d9a0) Stream removed, broadcasting: 3
I0104 15:43:08.491849       9 log.go:172] (0xc0027d3080) (0xc0019f9ae0) Stream removed, broadcasting: 5
Jan  4 15:43:08.492: INFO: Exec stderr: ""
Jan  4 15:43:08.492: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6025 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:43:08.492: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:43:08.540510       9 log.go:172] (0xc005c9b130) (0xc0019f9e00) Create stream
I0104 15:43:08.540614       9 log.go:172] (0xc005c9b130) (0xc0019f9e00) Stream added, broadcasting: 1
I0104 15:43:08.552463       9 log.go:172] (0xc005c9b130) Reply frame received for 1
I0104 15:43:08.552567       9 log.go:172] (0xc005c9b130) (0xc0015ca0a0) Create stream
I0104 15:43:08.552574       9 log.go:172] (0xc005c9b130) (0xc0015ca0a0) Stream added, broadcasting: 3
I0104 15:43:08.554609       9 log.go:172] (0xc005c9b130) Reply frame received for 3
I0104 15:43:08.554664       9 log.go:172] (0xc005c9b130) (0xc0019f9f40) Create stream
I0104 15:43:08.554689       9 log.go:172] (0xc005c9b130) (0xc0019f9f40) Stream added, broadcasting: 5
I0104 15:43:08.556098       9 log.go:172] (0xc005c9b130) Reply frame received for 5
I0104 15:43:08.628314       9 log.go:172] (0xc005c9b130) Data frame received for 3
I0104 15:43:08.628360       9 log.go:172] (0xc0015ca0a0) (3) Data frame handling
I0104 15:43:08.628375       9 log.go:172] (0xc0015ca0a0) (3) Data frame sent
I0104 15:43:08.682923       9 log.go:172] (0xc005c9b130) (0xc0015ca0a0) Stream removed, broadcasting: 3
I0104 15:43:08.683022       9 log.go:172] (0xc005c9b130) Data frame received for 1
I0104 15:43:08.683059       9 log.go:172] (0xc0019f9e00) (1) Data frame handling
I0104 15:43:08.683084       9 log.go:172] (0xc0019f9e00) (1) Data frame sent
I0104 15:43:08.683103       9 log.go:172] (0xc005c9b130) (0xc0019f9e00) Stream removed, broadcasting: 1
I0104 15:43:08.683308       9 log.go:172] (0xc005c9b130) (0xc0019f9f40) Stream removed, broadcasting: 5
I0104 15:43:08.683348       9 log.go:172] (0xc005c9b130) (0xc0019f9e00) Stream removed, broadcasting: 1
I0104 15:43:08.683356       9 log.go:172] (0xc005c9b130) (0xc0015ca0a0) Stream removed, broadcasting: 3
I0104 15:43:08.683362       9 log.go:172] (0xc005c9b130) (0xc0019f9f40) Stream removed, broadcasting: 5
I0104 15:43:08.683375       9 log.go:172] (0xc005c9b130) Go away received
Jan  4 15:43:08.683: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:43:08.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6025" for this suite.

• [SLOW TEST:22.180 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4089,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:43:08.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  4 15:43:08.818: INFO: Number of nodes with available pods: 0
Jan  4 15:43:08.818: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:09.831: INFO: Number of nodes with available pods: 0
Jan  4 15:43:09.831: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:11.178: INFO: Number of nodes with available pods: 0
Jan  4 15:43:11.178: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:11.865: INFO: Number of nodes with available pods: 0
Jan  4 15:43:11.865: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:12.833: INFO: Number of nodes with available pods: 0
Jan  4 15:43:12.833: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:14.408: INFO: Number of nodes with available pods: 0
Jan  4 15:43:14.408: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:15.030: INFO: Number of nodes with available pods: 0
Jan  4 15:43:15.030: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:15.885: INFO: Number of nodes with available pods: 0
Jan  4 15:43:15.885: INFO: Node jerma-node is running more than one daemon pod
Jan  4 15:43:16.832: INFO: Number of nodes with available pods: 1
Jan  4 15:43:16.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:17.831: INFO: Number of nodes with available pods: 2
Jan  4 15:43:17.831: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  4 15:43:18.069: INFO: Number of nodes with available pods: 1
Jan  4 15:43:18.069: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:19.549: INFO: Number of nodes with available pods: 1
Jan  4 15:43:19.549: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:20.100: INFO: Number of nodes with available pods: 1
Jan  4 15:43:20.100: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:21.076: INFO: Number of nodes with available pods: 1
Jan  4 15:43:21.076: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:22.223: INFO: Number of nodes with available pods: 1
Jan  4 15:43:22.223: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:23.077: INFO: Number of nodes with available pods: 1
Jan  4 15:43:23.077: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:24.075: INFO: Number of nodes with available pods: 1
Jan  4 15:43:24.075: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:25.083: INFO: Number of nodes with available pods: 1
Jan  4 15:43:25.084: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:26.080: INFO: Number of nodes with available pods: 1
Jan  4 15:43:26.080: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  4 15:43:27.079: INFO: Number of nodes with available pods: 2
Jan  4 15:43:27.079: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1676, will wait for the garbage collector to delete the pods
Jan  4 15:43:27.147: INFO: Deleting DaemonSet.extensions daemon-set took: 9.577645ms
Jan  4 15:43:27.547: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.296131ms
Jan  4 15:43:43.150: INFO: Number of nodes with available pods: 0
Jan  4 15:43:43.150: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 15:43:43.153: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1676/daemonsets","resourceVersion":"49343"},"items":null}

Jan  4 15:43:43.155: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1676/pods","resourceVersion":"49343"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:43:43.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1676" for this suite.

• [SLOW TEST:34.516 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":257,"skipped":4190,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:43:43.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-08bfd71d-a837-4130-b3aa-1a79811fc7cd
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:43:59.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9262" for this suite.

• [SLOW TEST:16.322 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4194,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:43:59.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:43:59.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88" in namespace "downward-api-341" to be "success or failure"
Jan  4 15:43:59.640: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88": Phase="Pending", Reason="", readiness=false. Elapsed: 17.482355ms
Jan  4 15:44:01.695: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072023766s
Jan  4 15:44:03.700: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077309588s
Jan  4 15:44:06.230: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606867901s
Jan  4 15:44:10.791: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88": Phase="Pending", Reason="", readiness=false. Elapsed: 11.168074641s
Jan  4 15:44:12.796: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.172747824s
STEP: Saw pod success
Jan  4 15:44:12.796: INFO: Pod "downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88" satisfied condition "success or failure"
Jan  4 15:44:12.799: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88 container client-container: 
STEP: delete the pod
Jan  4 15:44:12.943: INFO: Waiting for pod downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88 to disappear
Jan  4 15:44:12.951: INFO: Pod downwardapi-volume-fd270095-98e3-44d0-bd62-8c07f9039d88 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:44:12.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-341" for this suite.

• [SLOW TEST:13.424 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4195,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:44:12.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:45:13.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3514" for this suite.

• [SLOW TEST:60.317 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":260,"skipped":4231,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:45:13.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-36aa2b7a-9002-40c8-a7f5-6571a85dfb33
STEP: Creating a pod to test consume configMaps
Jan  4 15:45:13.373: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7" in namespace "projected-310" to be "success or failure"
Jan  4 15:45:13.412: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.542217ms
Jan  4 15:45:15.427: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053287485s
Jan  4 15:45:17.431: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057512464s
Jan  4 15:45:19.436: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062406346s
Jan  4 15:45:21.440: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066646692s
Jan  4 15:45:23.446: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072186814s
STEP: Saw pod success
Jan  4 15:45:23.446: INFO: Pod "pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7" satisfied condition "success or failure"
Jan  4 15:45:23.449: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 15:45:23.568: INFO: Waiting for pod pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7 to disappear
Jan  4 15:45:23.574: INFO: Pod pod-projected-configmaps-190f1e90-e332-46fd-8d8a-21a15224fce7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:45:23.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-310" for this suite.

• [SLOW TEST:10.310 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4269,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:45:23.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  4 15:45:47.731: INFO: Container started at 2020-01-04 15:45:27 +0000 UTC, pod became ready at 2020-01-04 15:45:47 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:45:47.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6751" for this suite.

• [SLOW TEST:24.151 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4288,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:45:47.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jan  4 15:45:47.812: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix745098131/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:45:47.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7775" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":263,"skipped":4299,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:45:47.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:45:48.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16" in namespace "projected-9318" to be "success or failure"
Jan  4 15:45:48.121: INFO: Pod "downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16": Phase="Pending", Reason="", readiness=false. Elapsed: 113.476356ms
Jan  4 15:45:50.126: INFO: Pod "downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118124111s
Jan  4 15:45:52.133: INFO: Pod "downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12544162s
Jan  4 15:45:54.136: INFO: Pod "downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129071976s
STEP: Saw pod success
Jan  4 15:45:54.137: INFO: Pod "downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16" satisfied condition "success or failure"
Jan  4 15:45:54.140: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16 container client-container: 
STEP: delete the pod
Jan  4 15:45:54.168: INFO: Waiting for pod downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16 to disappear
Jan  4 15:45:54.178: INFO: Pod downwardapi-volume-4480aad4-3648-40d4-b816-1d2d51e69e16 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:45:54.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9318" for this suite.

• [SLOW TEST:6.333 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4316,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:45:54.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  4 15:45:55.014: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  4 15:45:57.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749554, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:45:59.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749554, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:46:01.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749554, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:46:03.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749555, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749554, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  4 15:46:06.168: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:46:16.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9726" for this suite.
STEP: Destroying namespace "webhook-9726-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.253 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":265,"skipped":4329,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:46:16.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-7647eafd-38e0-44c9-8720-c0abc3b79c1d
STEP: Creating a pod to test consume configMaps
Jan  4 15:46:16.632: INFO: Waiting up to 5m0s for pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb" in namespace "configmap-9214" to be "success or failure"
Jan  4 15:46:16.783: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb": Phase="Pending", Reason="", readiness=false. Elapsed: 151.109831ms
Jan  4 15:46:18.788: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155911876s
Jan  4 15:46:20.804: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171458279s
Jan  4 15:46:22.808: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175817107s
Jan  4 15:46:24.813: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180326306s
Jan  4 15:46:26.841: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.208256508s
STEP: Saw pod success
Jan  4 15:46:26.841: INFO: Pod "pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb" satisfied condition "success or failure"
Jan  4 15:46:26.845: INFO: Trying to get logs from node jerma-node pod pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb container configmap-volume-test: 
STEP: delete the pod
Jan  4 15:46:26.891: INFO: Waiting for pod pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb to disappear
Jan  4 15:46:26.900: INFO: Pod pod-configmaps-530079ef-9df5-414d-b6fa-69a8145c6edb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:46:26.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9214" for this suite.

• [SLOW TEST:10.394 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4332,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:46:26.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  4 15:46:27.426: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2" in namespace "security-context-test-7018" to be "success or failure"
Jan  4 15:46:27.448: INFO: Pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.946861ms
Jan  4 15:46:29.452: INFO: Pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025851871s
Jan  4 15:46:31.460: INFO: Pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033408966s
Jan  4 15:46:33.465: INFO: Pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03816649s
Jan  4 15:46:33.465: INFO: Pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2" satisfied condition "success or failure"
Jan  4 15:46:33.475: INFO: Got logs for pod "busybox-privileged-false-0e7d8f4e-6959-4fdb-a04d-d61153df33b2": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:46:33.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7018" for this suite.

• [SLOW TEST:6.582 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4362,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:46:33.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:46:45.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8947" for this suite.

• [SLOW TEST:12.279 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4374,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:46:45.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:46:45.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862" in namespace "projected-5926" to be "success or failure"
Jan  4 15:46:45.879: INFO: Pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862": Phase="Pending", Reason="", readiness=false. Elapsed: 13.19017ms
Jan  4 15:46:47.883: INFO: Pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01798806s
Jan  4 15:46:49.905: INFO: Pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039504285s
Jan  4 15:46:51.909: INFO: Pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043654651s
Jan  4 15:46:53.917: INFO: Pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051090232s
STEP: Saw pod success
Jan  4 15:46:53.917: INFO: Pod "downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862" satisfied condition "success or failure"
Jan  4 15:46:53.924: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862 container client-container: 
STEP: delete the pod
Jan  4 15:46:54.086: INFO: Waiting for pod downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862 to disappear
Jan  4 15:46:54.092: INFO: Pod downwardapi-volume-ff8387dc-d26b-481a-ba3e-f44c522c5862 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:46:54.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5926" for this suite.

• [SLOW TEST:8.328 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4388,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:46:54.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  4 15:46:54.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan  4 15:46:57.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1392 create -f -'
Jan  4 15:46:59.869: INFO: stderr: ""
Jan  4 15:46:59.869: INFO: stdout: "e2e-test-crd-publish-openapi-2888-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan  4 15:46:59.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1392 delete e2e-test-crd-publish-openapi-2888-crds test-cr'
Jan  4 15:47:00.093: INFO: stderr: ""
Jan  4 15:47:00.093: INFO: stdout: "e2e-test-crd-publish-openapi-2888-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan  4 15:47:00.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1392 apply -f -'
Jan  4 15:47:00.448: INFO: stderr: ""
Jan  4 15:47:00.448: INFO: stdout: "e2e-test-crd-publish-openapi-2888-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan  4 15:47:00.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1392 delete e2e-test-crd-publish-openapi-2888-crds test-cr'
Jan  4 15:47:00.621: INFO: stderr: ""
Jan  4 15:47:00.621: INFO: stdout: "e2e-test-crd-publish-openapi-2888-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan  4 15:47:00.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2888-crds'
Jan  4 15:47:00.899: INFO: stderr: ""
Jan  4 15:47:00.899: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2888-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:47:02.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1392" for this suite.

• [SLOW TEST:8.663 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":270,"skipped":4393,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:47:02.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  4 15:47:02.872: INFO: Waiting up to 5m0s for pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c" in namespace "emptydir-1476" to be "success or failure"
Jan  4 15:47:02.882: INFO: Pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.534581ms
Jan  4 15:47:04.887: INFO: Pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015723276s
Jan  4 15:47:06.893: INFO: Pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021560703s
Jan  4 15:47:08.904: INFO: Pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032526993s
Jan  4 15:47:10.909: INFO: Pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037599222s
STEP: Saw pod success
Jan  4 15:47:10.909: INFO: Pod "pod-0797ce8e-8e4d-4dc8-9654-a305970e603c" satisfied condition "success or failure"
Jan  4 15:47:10.912: INFO: Trying to get logs from node jerma-node pod pod-0797ce8e-8e4d-4dc8-9654-a305970e603c container test-container: 
STEP: delete the pod
Jan  4 15:47:11.403: INFO: Waiting for pod pod-0797ce8e-8e4d-4dc8-9654-a305970e603c to disappear
Jan  4 15:47:11.408: INFO: Pod pod-0797ce8e-8e4d-4dc8-9654-a305970e603c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:47:11.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1476" for this suite.

• [SLOW TEST:8.662 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4398,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:47:11.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  4 15:47:11.643: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
apt/
auth.log
btmp
[the same three-entry listing repeats identically for each of the remaining proxy iterations; the rest of this spec's output was lost to truncation, but the completed counter jumping from 271 to 273 across the next PASSED record shows it passed as spec 272]
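
The listing above is the response body from the apiserver's node-proxy subresource; the same endpoint can be queried by hand (node name taken from the request line above):

kubectl get --raw /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/
# or, via a local proxy:
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/
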
[the log resumes partway through the next spec, [sig-apps] StatefulSet should have a working scale subresource [Conformance]; its header and opening timestamp were cut off, leaving only the tail of the kubeConfig line]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3889
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-3889
Jan  4 15:47:12.283: INFO: Found 0 stateful pods, waiting for 1
Jan  4 15:47:22.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan  4 15:47:22.319: INFO: Deleting all statefulset in ns statefulset-3889
Jan  4 15:47:22.325: INFO: Scaling statefulset ss to 0
Jan  4 15:47:32.466: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:47:32.471: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:47:32.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3889" for this suite.

• [SLOW TEST:20.671 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":273,"skipped":4447,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:47:32.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-b578a6f7-4e4d-41ad-8c1e-c8b8198ed4a0
STEP: Creating secret with name secret-projected-all-test-volume-854582a7-ce5e-4ab8-be45-d7b081a5b2bf
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  4 15:47:32.634: INFO: Waiting up to 5m0s for pod "projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b" in namespace "projected-8532" to be "success or failure"
Jan  4 15:47:32.643: INFO: Pod "projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104626ms
Jan  4 15:47:34.649: INFO: Pod "projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014125818s
Jan  4 15:47:36.653: INFO: Pod "projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018952294s
Jan  4 15:47:38.658: INFO: Pod "projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023902809s
STEP: Saw pod success
Jan  4 15:47:38.658: INFO: Pod "projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b" satisfied condition "success or failure"
Jan  4 15:47:38.672: INFO: Trying to get logs from node jerma-node pod projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b container projected-all-volume-test: 
STEP: delete the pod
Jan  4 15:47:38.716: INFO: Waiting for pod projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b to disappear
Jan  4 15:47:38.757: INFO: Pod projected-volume-5368fd45-4fbf-4db4-8db3-1caa4281e33b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:47:38.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8532" for this suite.

• [SLOW TEST:6.208 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4464,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:47:38.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  4 15:47:48.224: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:47:48.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1692" for this suite.

• [SLOW TEST:9.562 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":275,"skipped":4486,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:47:48.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan  4 15:47:48.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6302'
Jan  4 15:47:48.617: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 15:47:48.617: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Jan  4 15:47:50.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6302'
Jan  4 15:47:50.786: INFO: stderr: ""
Jan  4 15:47:50.786: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:47:50.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6302" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":276,"skipped":4495,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  4 15:47:50.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:47:50.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d" in namespace "projected-7285" to be "success or failure"
Jan  4 15:47:51.040: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d": Phase="Pending", Reason="", readiness=false. Elapsed: 94.596719ms
Jan  4 15:47:53.045: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099402587s
Jan  4 15:47:55.050: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104098734s
Jan  4 15:47:57.054: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10868097s
Jan  4 15:47:59.068: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121844263s
Jan  4 15:48:01.123: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17731556s
STEP: Saw pod success
Jan  4 15:48:01.123: INFO: Pod "downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d" satisfied condition "success or failure"
Jan  4 15:48:01.126: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d container client-container: 
STEP: delete the pod
Jan  4 15:48:01.199: INFO: Waiting for pod downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d to disappear
Jan  4 15:48:01.208: INFO: Pod downwardapi-volume-85f44905-5667-4c88-844b-20affa18c25d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  4 15:48:01.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7285" for this suite.

• [SLOW TEST:10.428 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4515,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
Jan  4 15:48:01.287: INFO: Running AfterSuite actions on all nodes
Jan  4 15:48:01.287: INFO: Running AfterSuite actions on node 1
Jan  4 15:48:01.287: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315

Ran 278 of 4814 Specs in 7417.000 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (7417.09s)
FAIL