I0214 23:39:16.766589 10 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0214 23:39:16.767109 10 e2e.go:109] Starting e2e run "7274ae64-4b01-48a3-8283-13f646657458" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581723554 - Will randomize all specs
Will run 280 of 4845 specs

Feb 14 23:39:16.848: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 23:39:16.853: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 14 23:39:16.882: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 14 23:39:16.921: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 14 23:39:16.921: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 14 23:39:16.921: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 14 23:39:16.934: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 14 23:39:16.934: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 14 23:39:16.934: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 14 23:39:16.935: INFO: kube-apiserver version: v1.17.0
Feb 14 23:39:16.936: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 23:39:16.942: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:39:16.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Feb 14 23:39:17.056: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:39:59.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8648" for this suite.
• [SLOW TEST:42.140 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":1,"skipped":6,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:39:59.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-d4b4456f-29a1-475c-a20a-dff1fbad2c31
STEP: Creating a pod to test consume secrets
Feb 14 23:39:59.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e" in namespace "projected-4869" to be "success or failure"
Feb 14 23:39:59.298: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.260244ms
Feb 14 23:40:01.306: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037946877s
Feb 14 23:40:03.313: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045299962s
Feb 14 23:40:05.909: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641162773s
Feb 14 23:40:07.924: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656102488s
Feb 14 23:40:10.007: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.738483263s
STEP: Saw pod success
Feb 14 23:40:10.007: INFO: Pod "pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e" satisfied condition "success or failure"
Feb 14 23:40:10.011: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e container projected-secret-volume-test:
STEP: delete the pod
Feb 14 23:40:10.210: INFO: Waiting for pod pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e to disappear
Feb 14 23:40:10.219: INFO: Pod pod-projected-secrets-29270604-5c1e-4144-92f1-f3acaa0ce43e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:40:10.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4869" for this suite.
• [SLOW TEST:11.145 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":12,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:40:10.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0214 23:40:14.726031 10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 23:40:14.726: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:40:14.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1697" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":3,"skipped":20,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:40:14.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-4124
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4124 to expose endpoints map[]
Feb 14 23:40:15.289: INFO: Get endpoints failed (13.223581ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 14 23:40:16.329: INFO: successfully validated that service multi-endpoint-test in namespace services-4124 exposes endpoints map[] (1.053166235s elapsed)
STEP: Creating pod pod1 in namespace services-4124
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4124 to expose endpoints map[pod1:[100]]
Feb 14 23:40:20.913: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.568349243s elapsed, will retry)
Feb 14 23:40:24.959: INFO: successfully validated that service multi-endpoint-test in namespace services-4124 exposes endpoints map[pod1:[100]] (8.614727463s elapsed)
STEP: Creating pod pod2 in namespace services-4124
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4124 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 14 23:40:29.599: INFO: Unexpected endpoints: found map[caa879a2-7457-447c-80ef-a4562853af2e:[100]], expected map[pod1:[100] pod2:[101]] (4.635613667s elapsed, will retry)
Feb 14 23:40:32.652: INFO: successfully validated that service multi-endpoint-test in namespace services-4124 exposes endpoints map[pod1:[100] pod2:[101]] (7.688254488s elapsed)
STEP: Deleting pod pod1 in namespace services-4124
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4124 to expose endpoints map[pod2:[101]]
Feb 14 23:40:32.709: INFO: successfully validated that service multi-endpoint-test in namespace services-4124 exposes endpoints map[pod2:[101]] (43.12653ms elapsed)
STEP: Deleting pod pod2 in namespace services-4124
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4124 to expose endpoints map[]
Feb 14 23:40:33.780: INFO: successfully validated that service multi-endpoint-test in namespace services-4124 exposes endpoints map[] (1.028343614s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:40:33.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4124" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:19.193 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":4,"skipped":27,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:40:33.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Feb 14 23:40:34.111: INFO: Waiting up to 5m0s for pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b" in namespace "containers-1974" to be "success or failure"
Feb 14 23:40:34.131: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.762021ms
Feb 14 23:40:36.142: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030446751s
Feb 14 23:40:38.150: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038251091s
Feb 14 23:40:40.157: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045653261s
Feb 14 23:40:42.172: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060244824s
Feb 14 23:40:44.179: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067887375s
STEP: Saw pod success
Feb 14 23:40:44.179: INFO: Pod "client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b" satisfied condition "success or failure"
Feb 14 23:40:44.183: INFO: Trying to get logs from node jerma-node pod client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b container test-container:
STEP: delete the pod
Feb 14 23:40:44.218: INFO: Waiting for pod client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b to disappear
Feb 14 23:40:44.222: INFO: Pod client-containers-ef0c6840-18ea-4348-9e9c-388dd692a86b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:40:44.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1974" for this suite.
• [SLOW TEST:10.347 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:40:44.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-0d1202a3-6ad1-4a8d-9740-c4adc6ad1b3c
STEP: Creating a pod to test consume secrets
Feb 14 23:40:44.421: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea" in namespace "projected-5896" to be "success or failure"
Feb 14 23:40:44.434: INFO: Pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea": Phase="Pending", Reason="", readiness=false. Elapsed: 13.541792ms
Feb 14 23:40:46.444: INFO: Pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023277077s
Feb 14 23:40:48.454: INFO: Pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032852355s
Feb 14 23:40:50.479: INFO: Pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058197286s
Feb 14 23:40:52.487: INFO: Pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066434181s
STEP: Saw pod success
Feb 14 23:40:52.487: INFO: Pod "pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea" satisfied condition "success or failure"
Feb 14 23:40:52.491: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea container secret-volume-test:
STEP: delete the pod
Feb 14 23:40:52.565: INFO: Waiting for pod pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea to disappear
Feb 14 23:40:52.573: INFO: Pod pod-projected-secrets-f5bee6f9-53f8-4f4b-9203-adc9246242ea no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:40:52.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5896" for this suite.
• [SLOW TEST:8.349 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":6,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:40:52.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e46c0386-9d45-40c4-bd00-5c1a08045cef
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e46c0386-9d45-40c4-bd00-5c1a08045cef
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:42:07.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8452" for this suite.
• [SLOW TEST:75.058 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":7,"skipped":102,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:42:07.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 23:42:07.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09" in namespace "projected-7383" to be "success or failure"
Feb 14 23:42:07.902: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 58.839456ms
Feb 14 23:42:10.023: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179877738s
Feb 14 23:42:12.033: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190131918s
Feb 14 23:42:14.046: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202615282s
Feb 14 23:42:16.058: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214946843s
Feb 14 23:42:18.066: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.223034299s
STEP: Saw pod success
Feb 14 23:42:18.067: INFO: Pod "downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09" satisfied condition "success or failure"
Feb 14 23:42:18.070: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09 container client-container:
STEP: delete the pod
Feb 14 23:42:18.473: INFO: Waiting for pod downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09 to disappear
Feb 14 23:42:18.503: INFO: Pod downwardapi-volume-8b8d8645-bd0a-4c6d-a91d-370a8e8d1a09 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:42:18.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7383" for this suite.
• [SLOW TEST:10.889 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:42:18.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 14 23:42:18.783: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471700 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:42:18.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471700 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 14 23:42:28.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471735 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:42:28.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471735 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 14 23:42:38.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471759 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:42:38.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471759 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 14 23:42:48.843: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471783 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:42:48.843: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-a c7c9698f-a1c6-4449-9362-64465fb711b4 8471783 0 2020-02-14 23:42:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 14 23:42:58.859: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-b ee2e313d-dabb-487e-b0b7-042fccba03fa 8471807 0 2020-02-14 23:42:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:42:58.860: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-b ee2e313d-dabb-487e-b0b7-042fccba03fa 8471807 0 2020-02-14 23:42:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 14 23:43:08.873: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-b ee2e313d-dabb-487e-b0b7-042fccba03fa 8471831 0 2020-02-14 23:42:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:43:08.873: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8440 /api/v1/namespaces/watch-8440/configmaps/e2e-watch-test-configmap-b ee2e313d-dabb-487e-b0b7-042fccba03fa 8471831 0 2020-02-14 23:42:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:43:18.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8440" for this suite.
• [SLOW TEST:60.317 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":9,"skipped":166,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:43:18.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-6a724621-5776-4de3-a51c-c2400584304e
STEP: Creating a pod to test consume secrets
Feb 14 23:43:19.035: INFO: Waiting up to 5m0s for pod "pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea" in namespace "secrets-2598" to be "success or failure"
Feb 14 23:43:19.065: INFO: Pod "pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea": Phase="Pending", Reason="", readiness=false. Elapsed: 30.173194ms
Feb 14 23:43:21.075: INFO: Pod "pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039505813s
Feb 14 23:43:23.082: INFO: Pod "pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047013725s
Feb 14 23:43:25.089: INFO: Pod "pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053615167s
STEP: Saw pod success
Feb 14 23:43:25.089: INFO: Pod "pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea" satisfied condition "success or failure"
Feb 14 23:43:25.095: INFO: Trying to get logs from node jerma-node pod pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea container secret-volume-test:
STEP: delete the pod
Feb 14 23:43:25.146: INFO: Waiting for pod pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea to disappear
Feb 14 23:43:25.164: INFO: Pod pod-secrets-7e4b7c4f-139e-4fe2-82f7-6e1c038ed2ea no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:43:25.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2598" for this suite.
• [SLOW TEST:6.309 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":10,"skipped":176,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:43:25.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 23:43:26.098: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 23:43:28.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320605, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 23:43:30.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320605, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 23:43:32.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320605, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 23:43:35.260: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:43:47.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-434" for this suite.
STEP: Destroying namespace "webhook-434-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:22.615 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":11,"skipped":214,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:43:47.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-3042
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 23:43:47.937: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 14 23:43:48.042: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 23:43:50.101: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 23:43:52.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 23:43:54.910: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 23:43:56.323: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 23:43:58.052: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 23:44:00.049: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 23:44:02.049: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 23:44:04.052: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 23:44:06.052: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 23:44:08.052: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 23:44:10.051: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 14 23:44:10.060: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 14 23:44:12.071: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 14 23:44:14.067: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 14 23:44:16.065: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 14 23:44:18.342: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 14 23:44:26.415: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3042 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 23:44:26.416: INFO: >>> kubeConfig: /root/.kube/config
I0214 23:44:26.486387 10 log.go:172] (0xc001b78210) (0xc002e0eaa0) Create stream
I0214 23:44:26.486674 10 log.go:172] (0xc001b78210) (0xc002e0eaa0) Stream added, broadcasting: 1
I0214 23:44:26.493543 10 log.go:172] (0xc001b78210) Reply frame received for 1
I0214 23:44:26.493634 10 log.go:172] (0xc001b78210) (0xc002d4eaa0) Create stream
I0214 23:44:26.493662 10 log.go:172] (0xc001b78210) (0xc002d4eaa0) Stream added, broadcasting: 3
I0214 23:44:26.498251 10 log.go:172] (0xc001b78210) Reply frame received for 3
I0214 23:44:26.498289 10 log.go:172] (0xc001b78210) (0xc002b72960) Create stream
I0214 23:44:26.498299 10 log.go:172] (0xc001b78210) (0xc002b72960) Stream added, broadcasting: 5
I0214 23:44:26.500198 10 log.go:172] (0xc001b78210) Reply frame received for 5
I0214 23:44:26.654967 10 log.go:172] (0xc001b78210) Data frame received for 3
I0214 23:44:26.655390 10 log.go:172] (0xc002d4eaa0) (3) Data frame handling
I0214 23:44:26.655510 10 log.go:172] (0xc002d4eaa0) (3) Data frame sent
I0214 23:44:26.772425 10 log.go:172] (0xc001b78210) Data frame received for 1
I0214 23:44:26.772585 10 log.go:172] (0xc001b78210) (0xc002b72960) Stream removed, broadcasting: 5
I0214 23:44:26.772652 10 log.go:172] (0xc002e0eaa0) (1) Data frame handling
I0214 23:44:26.772689 10 log.go:172] (0xc002e0eaa0) (1) Data frame sent
I0214 23:44:26.772927 10 log.go:172] (0xc001b78210) (0xc002d4eaa0) Stream removed, broadcasting: 3
I0214 23:44:26.773173 10 log.go:172] (0xc001b78210) (0xc002e0eaa0) Stream removed, broadcasting: 1
I0214 23:44:26.773218 10 log.go:172] (0xc001b78210) Go away received
I0214 23:44:26.775303 10 log.go:172] (0xc001b78210) (0xc002e0eaa0) Stream removed, broadcasting: 1
I0214 23:44:26.775333 10 log.go:172] (0xc001b78210) (0xc002d4eaa0) Stream removed, broadcasting: 3
I0214 23:44:26.775348 10 log.go:172] (0xc001b78210) (0xc002b72960) Stream removed, broadcasting: 5
Feb 14 23:44:26.775: INFO: Waiting for responses: map[]
Feb 14 23:44:26.780: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3042 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 23:44:26.781: INFO: >>> kubeConfig: /root/.kube/config
I0214 23:44:26.828979 10 log.go:172] (0xc001728000) (0xc002d4eb40) Create stream
I0214 23:44:26.829151 10 log.go:172] (0xc001728000) (0xc002d4eb40) Stream added, broadcasting: 1
I0214 23:44:26.840026 10 log.go:172] (0xc001728000) Reply frame received for 1
I0214 23:44:26.840169 10 log.go:172] (0xc001728000) (0xc002c80000) Create stream
I0214 23:44:26.840194 10 log.go:172] (0xc001728000) (0xc002c80000) Stream added, broadcasting: 3
I0214 23:44:26.842064 10 log.go:172] (0xc001728000) Reply frame received for 3
I0214 23:44:26.842090 10 log.go:172] (0xc001728000) (0xc002b72a00) Create stream
I0214 23:44:26.842098 10 log.go:172] (0xc001728000) (0xc002b72a00) Stream added, broadcasting: 5
I0214 23:44:26.843286 10 log.go:172] (0xc001728000) Reply frame received for 5
I0214 23:44:26.951899 10 log.go:172] (0xc001728000) Data frame received for 3
I0214 23:44:26.952052 10 log.go:172] (0xc002c80000) (3) Data frame handling
I0214 23:44:26.952135 10 log.go:172] (0xc002c80000) (3) Data frame sent
I0214 23:44:27.040517 10 log.go:172] (0xc001728000) (0xc002c80000) Stream removed, broadcasting: 3
I0214 23:44:27.040743 10 log.go:172] (0xc001728000) (0xc002b72a00) Stream removed, broadcasting: 5
I0214 23:44:27.040953 10 log.go:172] (0xc001728000) Data frame received for 1
I0214 23:44:27.041225 10 log.go:172] (0xc002d4eb40) (1) Data frame handling
I0214 23:44:27.041281 10 log.go:172] (0xc002d4eb40) (1) Data frame sent
I0214 23:44:27.041363 10 log.go:172] (0xc001728000) (0xc002d4eb40) Stream removed, broadcasting: 1
I0214 23:44:27.041463 10 log.go:172] (0xc001728000) Go away received
I0214 23:44:27.041908 10 log.go:172] (0xc001728000) (0xc002d4eb40) Stream removed, broadcasting: 1
I0214 23:44:27.041947 10 log.go:172] (0xc001728000) (0xc002c80000) Stream removed, broadcasting: 3
I0214 23:44:27.041967 10 log.go:172] (0xc001728000) (0xc002b72a00) Stream removed, broadcasting: 5
Feb 14 23:44:27.042: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:44:27.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3042" for this suite.
• [SLOW TEST:39.249 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":12,"skipped":234,"failed":0}
S
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:44:27.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:44:41.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2823" for this suite.
• [SLOW TEST:14.545 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":13,"skipped":235,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:44:41.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 14 23:44:41.800: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:44:53.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6961" for this suite.
• [SLOW TEST:12.187 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":14,"skipped":239,"failed":0}
SS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:44:53.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9617.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9617.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 23:45:04.292: INFO: DNS probes using dns-9617/dns-test-bc04713c-73bf-45d9-b9a6-28a24f94b53c succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:45:04.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9617" for this suite.
• [SLOW TEST:10.702 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":280,"completed":15,"skipped":241,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 23:45:04.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 14 23:45:08.991: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 14 23:45:11.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320708, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 23:45:13.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320708, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 23:45:15.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320708, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 23:45:17.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320709, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717320708, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 14 23:45:20.121: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by 
the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 23:45:20.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4476" for this suite. STEP: Destroying namespace "webhook-4476-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.984 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":16,"skipped":245,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 23:45:20.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be 
garbage collected
STEP: Gathering metrics
W0214 23:45:31.561509 10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 23:45:31.561: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:45:31.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9857" for this suite.
• [SLOW TEST:11.171 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":17,"skipped":250,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:45:31.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 23:45:46.035: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:46.067: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:45:48.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:48.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:45:50.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:50.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:45:52.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:52.075: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:45:54.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:54.077: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:45:56.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:56.074: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:45:58.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:45:58.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:46:00.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:46:00.077: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:46:02.068: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:46:02.075: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 23:46:04.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 23:46:04.086: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:46:04.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2241" for this suite.
• [SLOW TEST:32.429 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":250,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:46:04.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting
to observe a delete notification for the watched object
Feb 14 23:46:04.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5894 /api/v1/namespaces/watch-5894/configmaps/e2e-watch-test-label-changed 0a76437a-cf92-4fcc-84cf-3299245c7fd0 8472616 0 2020-02-14 23:46:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:46:04.364: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5894 /api/v1/namespaces/watch-5894/configmaps/e2e-watch-test-label-changed 0a76437a-cf92-4fcc-84cf-3299245c7fd0 8472617 0 2020-02-14 23:46:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:46:04.364: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5894 /api/v1/namespaces/watch-5894/configmaps/e2e-watch-test-label-changed 0a76437a-cf92-4fcc-84cf-3299245c7fd0 8472618 0 2020-02-14 23:46:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 14 23:46:14.552: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5894 /api/v1/namespaces/watch-5894/configmaps/e2e-watch-test-label-changed 0a76437a-cf92-4fcc-84cf-3299245c7fd0 8472657 0 2020-02-14 23:46:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:46:14.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5894 /api/v1/namespaces/watch-5894/configmaps/e2e-watch-test-label-changed 0a76437a-cf92-4fcc-84cf-3299245c7fd0 8472658 0 2020-02-14 23:46:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 23:46:14.553: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5894 /api/v1/namespaces/watch-5894/configmaps/e2e-watch-test-label-changed 0a76437a-cf92-4fcc-84cf-3299245c7fd0 8472659 0 2020-02-14 23:46:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:46:14.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5894" for this suite.
• [SLOW TEST:10.473 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":19,"skipped":259,"failed":0}
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:46:14.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 in namespace container-probe-7743
Feb 14 23:46:20.741: INFO: Started pod liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 in namespace container-probe-7743
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 23:46:20.743: INFO: Initial restart count of pod liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 is 0
Feb 14 23:46:34.839: INFO: Restart count of pod
container-probe-7743/liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 is now 1 (14.095438336s elapsed)
Feb 14 23:46:56.940: INFO: Restart count of pod container-probe-7743/liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 is now 2 (36.196396773s elapsed)
Feb 14 23:47:17.013: INFO: Restart count of pod container-probe-7743/liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 is now 3 (56.269627307s elapsed)
Feb 14 23:47:37.123: INFO: Restart count of pod container-probe-7743/liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 is now 4 (1m16.379352866s elapsed)
Feb 14 23:48:37.484: INFO: Restart count of pod container-probe-7743/liveness-9d32a414-e1f8-43c7-b351-e63e60fcd139 is now 5 (2m16.740974942s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:48:37.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7743" for this suite.
• [SLOW TEST:143.044 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":20,"skipped":259,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:48:37.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 14 23:48:37.776: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 23:48:37.788: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 23:48:37.792: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Feb 14 23:48:37.813: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.813: INFO: Container kube-proxy ready: true, restart count 0
Feb 14 23:48:37.813: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 14 23:48:37.813: INFO: Container weave ready: true, restart count 1
Feb 14 23:48:37.813: INFO: Container weave-npc ready: true, restart count 0
Feb 14 23:48:37.813: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 14 23:48:37.834: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container coredns ready: true, restart count 0
Feb 14 23:48:37.834: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container coredns ready: true, restart count 0
Feb 14 23:48:37.834: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container kube-controller-manager ready: true, restart count 7
Feb 14 23:48:37.834: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container kube-proxy ready: true, restart count 0
Feb 14 23:48:37.834: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container weave ready: true, restart count 0
Feb 14 23:48:37.834: INFO: Container weave-npc ready: true, restart count 0
Feb 14 23:48:37.834: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container kube-scheduler ready: true, restart count 11
Feb 14 23:48:37.834: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container kube-apiserver ready: true, restart count 1
Feb 14 23:48:37.834: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 14 23:48:37.834: INFO: Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-114cbbc3-cc89-4613-a177-e3f3d7357479 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-114cbbc3-cc89-4613-a177-e3f3d7357479 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-114cbbc3-cc89-4613-a177-e3f3d7357479
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:48:56.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-764" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
• [SLOW TEST:18.667 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":280,"completed":21,"skipped":280,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:48:56.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-rncc
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 23:48:56.504: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rncc" in namespace "subpath-2068" to be "success or failure"
Feb 14 23:48:56.556: INFO: Pod
"pod-subpath-test-configmap-rncc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.059409ms
Feb 14 23:48:58.579: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075178022s
Feb 14 23:49:00.590: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086249688s
Feb 14 23:49:02.598: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093573234s
Feb 14 23:49:04.603: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 8.099382099s
Feb 14 23:49:06.688: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 10.183607919s
Feb 14 23:49:08.698: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 12.194230247s
Feb 14 23:49:10.709: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 14.204644081s
Feb 14 23:49:12.718: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 16.213626683s
Feb 14 23:49:14.725: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 18.221223507s
Feb 14 23:49:16.734: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 20.229962771s
Feb 14 23:49:18.746: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 22.241857326s
Feb 14 23:49:20.755: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 24.251039929s
Feb 14 23:49:22.763: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Running", Reason="", readiness=true. Elapsed: 26.258603465s
Feb 14 23:49:24.770: INFO: Pod "pod-subpath-test-configmap-rncc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.266258299s
STEP: Saw pod success
Feb 14 23:49:24.770: INFO: Pod "pod-subpath-test-configmap-rncc" satisfied condition "success or failure"
Feb 14 23:49:24.775: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-rncc container test-container-subpath-configmap-rncc:
STEP: delete the pod
Feb 14 23:49:24.817: INFO: Waiting for pod pod-subpath-test-configmap-rncc to disappear
Feb 14 23:49:24.829: INFO: Pod pod-subpath-test-configmap-rncc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rncc
Feb 14 23:49:24.830: INFO: Deleting pod "pod-subpath-test-configmap-rncc" in namespace "subpath-2068"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:49:24.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2068" for this suite.
• [SLOW TEST:28.558 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":22,"skipped":301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach]
[sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:49:24.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:49:24.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-214" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":23,"skipped":376,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:49:24.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:49:25.111: INFO: Creating ReplicaSet my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad
Feb 14 23:49:25.236: INFO: Pod name my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad: Found 0 pods out of 1
Feb 14 23:49:30.388: INFO: Pod name my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad: Found 1 pods out of 1
Feb 14 23:49:30.388: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad" is running
Feb 14 23:49:32.401: INFO: Pod "my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad-jv25w" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 23:49:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 23:49:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 23:49:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 23:49:25 +0000 UTC Reason: Message:}])
Feb 14 23:49:32.401: INFO: Trying to dial the pod
Feb 14 23:49:37.641: INFO: Controller my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad: Got expected result from replica 1 [my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad-jv25w]: "my-hostname-basic-f2feadd9-cec7-44af-a212-669a16a762ad-jv25w", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:49:37.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4523" for this suite.
• [SLOW TEST:12.671 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":24,"skipped":399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:49:37.660: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Feb 14 23:49:37.795: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Feb 14 23:49:37.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9804'
Feb 14 23:49:40.851: INFO: stderr: ""
Feb 14 23:49:40.851: INFO: stdout: "service/agnhost-slave created\n"
Feb 14 23:49:40.853: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Feb 14 23:49:40.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9804'
Feb 14 23:49:41.389: INFO: stderr: ""
Feb 14 23:49:41.389: INFO: stdout: "service/agnhost-master created\n"
Feb 14 23:49:41.390: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Feb 14 23:49:41.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9804'
Feb 14 23:49:41.988: INFO: stderr: ""
Feb 14 23:49:41.988: INFO: stdout: "service/frontend created\n"
Feb 14 23:49:41.989: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Feb 14 23:49:41.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9804'
Feb 14 23:49:42.439: INFO: stderr: ""
Feb 14 23:49:42.439: INFO: stdout: "deployment.apps/frontend created\n"
Feb 14 23:49:42.440: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Feb 14 23:49:42.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9804'
Feb 14 23:49:42.931: INFO: stderr: ""
Feb 14 23:49:42.931: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 14 23:49:42.933: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Feb 14 23:49:42.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9804'
Feb 14 23:49:43.996: INFO: stderr: ""
Feb 14 23:49:43.996: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb 14 23:49:43.997: INFO: Waiting for all frontend pods to be Running.
Feb 14 23:50:09.052: INFO: Waiting for frontend to serve content.
Feb 14 23:50:09.109: INFO: Trying to add a new entry to the guestbook.
Feb 14 23:50:09.127: INFO: Verifying that added entry can be retrieved.
Feb 14 23:50:09.147: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Feb 14 23:50:14.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9804'
Feb 14 23:50:14.458: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:50:14.459: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 23:50:14.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9804'
Feb 14 23:50:14.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:50:14.660: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 23:50:14.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9804'
Feb 14 23:50:14.818: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:50:14.818: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 23:50:14.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9804'
Feb 14 23:50:14.927: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:50:14.927: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 23:50:14.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9804'
Feb 14 23:50:15.055: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:50:15.055: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 23:50:15.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9804'
Feb 14 23:50:15.234: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:50:15.234: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:50:15.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9804" for this suite.
• [SLOW TEST:37.724 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":25,"skipped":441,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:50:15.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 14 23:50:15.658: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 23:50:15.676: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 23:50:15.686: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Feb 14 23:50:17.396: INFO: frontend-6c5f89d5d4-qsdjf from kubectl-9804 started at 2020-02-14 23:49:42 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.396: INFO: Container guestbook-frontend ready: true, restart count 0
Feb 14 23:50:17.396: INFO: agnhost-master-74c46fb7d4-2pvds from kubectl-9804 started at 2020-02-14 23:49:44 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.396: INFO: Container master ready: true, restart count 0
Feb 14 23:50:17.397: INFO: agnhost-slave-774cfc759f-jkr2h from kubectl-9804 started at 2020-02-14 23:49:45 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.397: INFO: Container slave ready: true, restart count 0
Feb 14 23:50:17.397: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.397: INFO: Container kube-proxy ready: true, restart count 0
Feb 14 23:50:17.397: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 14 23:50:17.397: INFO: Container weave ready: true, restart count 1
Feb 14 23:50:17.397: INFO: Container weave-npc ready: true, restart count 0
Feb 14 23:50:17.397: INFO: frontend-6c5f89d5d4-lrznv from kubectl-9804 started at 2020-02-14 23:49:42 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.397: INFO: Container guestbook-frontend ready: true, restart count 0
Feb 14 23:50:17.397: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 14 23:50:17.501: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.501: INFO: Container coredns ready: true, restart count 0
Feb 14 23:50:17.501: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.501: INFO: Container coredns ready: true, restart count 0
Feb 14 23:50:17.501: INFO: frontend-6c5f89d5d4-9dpdk from kubectl-9804 started at 2020-02-14 23:49:42 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container guestbook-frontend ready: true, restart count 0
Feb 14 23:50:17.502: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container kube-controller-manager ready: true, restart count 7
Feb 14 23:50:17.502: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container kube-proxy ready: true, restart count 0
Feb 14 23:50:17.502: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container weave ready: true, restart count 0
Feb 14 23:50:17.502: INFO: Container weave-npc ready: true, restart count 0
Feb 14 23:50:17.502: INFO: agnhost-slave-774cfc759f-dpjts from kubectl-9804 started at 2020-02-14 23:49:44 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container slave ready: true, restart count 0
Feb 14 23:50:17.502: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container kube-scheduler ready: true, restart count 11
Feb 14 23:50:17.502: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container kube-apiserver ready: true, restart count 1
Feb 14 23:50:17.502: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 14 23:50:17.502: INFO: Container etcd ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2be4ba61-3edd-4f02-8043-fdaad364c529 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-2be4ba61-3edd-4f02-8043-fdaad364c529 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2be4ba61-3edd-4f02-8043-fdaad364c529
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:55:42.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1734" for this suite.
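The conflict exercised in the steps above comes down to two pod specs like the following sketch. This is a reconstruction, not the test's literal fixture: the pod names, hostPort 54322, the hostIPs, and the node-label key/value are taken from the log, while the manifest shape (container name, image, nodeSelector placement) is an assumption.

```yaml
# Sketch (reconstructed): pod4 binds hostPort 54322 with hostIP unset,
# which means 0.0.0.0, so it claims the port on every host address and
# schedules. pod5 then requests the same hostPort/protocol on 127.0.0.1
# on the same node and must NOT schedule, because 0.0.0.0 already covers it.
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-2be4ba61-3edd-4f02-8043-fdaad364c529: "95"
  containers:
  - name: agnhost                                  # assumed container name/image
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54322
      hostPort: 54322          # hostIP omitted -> 0.0.0.0
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-2be4ba61-3edd-4f02-8043-fdaad364c529: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: 127.0.0.1        # conflicts with pod4's 0.0.0.0 binding
      protocol: TCP
```

The roughly five-minute gap between the label step and the teardown timestamp is consistent with the test waiting out pod5's scheduling timeout before passing.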
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
• [SLOW TEST:327.290 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":26,"skipped":448,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:55:42.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:55:42.755: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 14 23:55:47.763: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 23:55:49.773: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment
test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 14 23:55:49.909: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3771 /apis/apps/v1/namespaces/deployment-3771/deployments/test-cleanup-deployment 61d15421-f396-4488-9d22-655d955f7333 8474368 1 2020-02-14 23:55:49 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000895f68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 14 23:55:49.923: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3771 
/apis/apps/v1/namespaces/deployment-3771/replicasets/test-cleanup-deployment-55ffc6b7b6 4b9d8eed-d8df-4742-bc29-027a711b63b2 8474370 1 2020-02-14 23:55:49 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 61d15421-f396-4488-9d22-655d955f7333 0xc000766097 0xc000766098}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007661a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 14 23:55:49.923: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 14 23:55:49.924: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3771 /apis/apps/v1/namespaces/deployment-3771/replicasets/test-cleanup-controller a78f26bc-6e94-4aac-b90d-c876f51f5b95 8474369 1 2020-02-14 23:55:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 
61d15421-f396-4488-9d22-655d955f7333 0xc0009f5e97 0xc0009f5e98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0009f5f68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 14 23:55:49.985: INFO: Pod "test-cleanup-controller-6qh4h" is available: &Pod{ObjectMeta:{test-cleanup-controller-6qh4h test-cleanup-controller- deployment-3771 /api/v1/namespaces/deployment-3771/pods/test-cleanup-controller-6qh4h 1e98bf24-0986-41e0-8488-d47e2524b5d1 8474362 0 2020-02-14 23:55:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller a78f26bc-6e94-4aac-b90d-c876f51f5b95 0xc002b664d7 0xc002b664d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ps97k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ps97k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ps97k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 23:55:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 23:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 23:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 23:55:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-14 23:55:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 23:55:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://8d04f08473597ee90d7189688e80c34c637dd6309db13cdf09d0e4c651c59e13,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 23:55:49.986: INFO: Pod 
"test-cleanup-deployment-55ffc6b7b6-lcgxk" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-lcgxk test-cleanup-deployment-55ffc6b7b6- deployment-3771 /api/v1/namespaces/deployment-3771/pods/test-cleanup-deployment-55ffc6b7b6-lcgxk 3b9df2bc-8212-4d04-8d6d-70f5ad6662b3 8474374 0 2020-02-14 23:55:49 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 4b9d8eed-d8df-4742-bc29-027a711b63b2 0xc002b666a7 0xc002b666a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ps97k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ps97k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ps97k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 23:55:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 23:55:49.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3771" for this suite. 
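The Deployment dump above shows `RevisionHistoryLimit:*0`, which is what this test exercises: with zero retained revisions, the superseded `test-cleanup-controller` ReplicaSet is garbage-collected as soon as the new `test-cleanup-deployment-55ffc6b7b6` ReplicaSet takes over. A minimal sketch of such a Deployment follows; the field values (name, labels, replicas, image) are taken from the dump, but the YAML layout is a reconstruction, not the test's literal manifest.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets once a rollout supersedes them
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

The default is `revisionHistoryLimit: 10`; the test pins it to 0 so that "delete old replica sets" is observable immediately after the rollout.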
• [SLOW TEST:7.512 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":27,"skipped":468,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:55:50.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-prn9
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 23:55:50.616: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-prn9" in namespace "subpath-1021" to be "success or failure"
Feb 14 23:55:50.673: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Pending", Reason="", readiness=false. Elapsed: 56.115252ms
Feb 14 23:55:52.744: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127084956s
Feb 14 23:55:54.749: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132304439s
Feb 14 23:55:56.758: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141093238s
Feb 14 23:55:58.773: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156016971s
Feb 14 23:56:00.783: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166439948s
Feb 14 23:56:02.791: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 12.17496344s
Feb 14 23:56:04.798: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 14.181456555s
Feb 14 23:56:07.124: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 16.507524552s
Feb 14 23:56:09.134: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 18.517060244s
Feb 14 23:56:11.146: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 20.529607306s
Feb 14 23:56:13.156: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 22.539713866s
Feb 14 23:56:15.167: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 24.550415689s
Feb 14 23:56:17.176: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 26.559720524s
Feb 14 23:56:19.184: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 28.567734548s
Feb 14 23:56:21.193: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 30.576010245s
Feb 14 23:56:23.202: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Running", Reason="", readiness=true. Elapsed: 32.585959948s
Feb 14 23:56:25.209: INFO: Pod "pod-subpath-test-secret-prn9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.592761113s
STEP: Saw pod success
Feb 14 23:56:25.209: INFO: Pod "pod-subpath-test-secret-prn9" satisfied condition "success or failure"
Feb 14 23:56:25.213: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-prn9 container test-container-subpath-secret-prn9:
STEP: delete the pod
Feb 14 23:56:25.583: INFO: Waiting for pod pod-subpath-test-secret-prn9 to disappear
Feb 14 23:56:25.622: INFO: Pod pod-subpath-test-secret-prn9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-prn9
Feb 14 23:56:25.623: INFO: Deleting pod "pod-subpath-test-secret-prn9" in namespace "subpath-1021"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:56:25.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1021" for this suite.
• [SLOW TEST:35.481 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":28,"skipped":483,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with terminating scopes.
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:56:25.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:56:42.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3323" for this suite.
• [SLOW TEST:16.465 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":29,"skipped":504,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:56:42.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 14 23:56:42.291: INFO: PodSpec: initContainers in spec.initContainers
Feb 14 23:57:39.355: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-408d829e-6a7d-45d8-ac7f-2655ed1ac8ed", GenerateName:"", Namespace:"init-container-9100", SelfLink:"/api/v1/namespaces/init-container-9100/pods/pod-init-408d829e-6a7d-45d8-ac7f-2655ed1ac8ed", UID:"5b730209-2198-4f57-a742-a4552a9ec7ff", ResourceVersion:"8474758", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717321402, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil),
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"291740310"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4772d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002068000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4772d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4772d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4772d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001eb00c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002512000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001eb0150)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001eb0170)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001eb0178), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001eb017c), 
PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321402, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321402, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321402, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321402, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc002c90020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002aec070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc002aec0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a06881da5f0363abd8937d5b1902c777326e985648b8ae6ddcf633591447662d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c90060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c90040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001eb01ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:57:39.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9100" for this suite.
• [SLOW TEST:57.242 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":30,"skipped":518,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:57:39.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:57:39.513: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 15.605249ms)
Feb 14 23:57:39.576: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 62.99839ms)
Feb 14 23:57:39.589: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.722025ms)
Feb 14 23:57:39.597: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.694305ms)
Feb 14 23:57:39.602: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.850691ms)
Feb 14 23:57:39.607: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.095817ms)
Feb 14 23:57:39.612: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.61667ms)
Feb 14 23:57:39.618: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.249837ms)
Feb 14 23:57:39.627: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.046648ms)
Feb 14 23:57:39.637: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.803321ms)
Feb 14 23:57:39.643: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.228091ms)
Feb 14 23:57:39.649: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.436412ms)
Feb 14 23:57:39.655: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.67071ms)
Feb 14 23:57:39.662: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.076235ms)
Feb 14 23:57:39.669: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.666393ms)
Feb 14 23:57:39.675: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.251058ms)
Feb 14 23:57:39.680: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.484131ms)
Feb 14 23:57:39.686: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.872058ms)
Feb 14 23:57:39.692: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.674947ms)
Feb 14 23:57:39.699: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.370911ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:57:39.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6026" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":31,"skipped":539,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:57:39.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:57:39.826: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:57:39.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5928" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":280,"completed":32,"skipped":545,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:57:40.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 14 23:57:40.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9713'
Feb 14 23:57:40.442: INFO: stderr: ""
Feb 14 23:57:40.442: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 23:57:40.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:57:40.713: INFO: stderr: ""
Feb 14 23:57:40.713: INFO: stdout: "update-demo-nautilus-jvbz9 update-demo-nautilus-tq8th "
Feb 14 23:57:40.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:40.840: INFO: stderr: ""
Feb 14 23:57:40.840: INFO: stdout: ""
Feb 14 23:57:40.840: INFO: update-demo-nautilus-jvbz9 is created but not running
Feb 14 23:57:45.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:57:46.445: INFO: stderr: ""
Feb 14 23:57:46.445: INFO: stdout: "update-demo-nautilus-jvbz9 update-demo-nautilus-tq8th "
Feb 14 23:57:46.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:46.847: INFO: stderr: ""
Feb 14 23:57:46.847: INFO: stdout: ""
Feb 14 23:57:46.847: INFO: update-demo-nautilus-jvbz9 is created but not running
Feb 14 23:57:51.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:57:51.983: INFO: stderr: ""
Feb 14 23:57:51.983: INFO: stdout: "update-demo-nautilus-jvbz9 update-demo-nautilus-tq8th "
Feb 14 23:57:51.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:52.136: INFO: stderr: ""
Feb 14 23:57:52.136: INFO: stdout: "true"
Feb 14 23:57:52.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:52.394: INFO: stderr: ""
Feb 14 23:57:52.395: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 23:57:52.395: INFO: validating pod update-demo-nautilus-jvbz9
Feb 14 23:57:52.429: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 23:57:52.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 23:57:52.429: INFO: update-demo-nautilus-jvbz9 is verified up and running
Feb 14 23:57:52.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq8th -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:52.663: INFO: stderr: ""
Feb 14 23:57:52.663: INFO: stdout: "true"
Feb 14 23:57:52.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq8th -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:52.793: INFO: stderr: ""
Feb 14 23:57:52.794: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 23:57:52.794: INFO: validating pod update-demo-nautilus-tq8th
Feb 14 23:57:52.805: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 23:57:52.805: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 23:57:52.805: INFO: update-demo-nautilus-tq8th is verified up and running
STEP: scaling down the replication controller
Feb 14 23:57:52.811: INFO: scanned /root for discovery docs: 
Feb 14 23:57:52.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9713'
Feb 14 23:57:53.942: INFO: stderr: ""
Feb 14 23:57:53.943: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 23:57:53.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:57:54.129: INFO: stderr: ""
Feb 14 23:57:54.130: INFO: stdout: "update-demo-nautilus-jvbz9 update-demo-nautilus-tq8th "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 14 23:57:59.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:57:59.267: INFO: stderr: ""
Feb 14 23:57:59.267: INFO: stdout: "update-demo-nautilus-jvbz9 "
Feb 14 23:57:59.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:59.383: INFO: stderr: ""
Feb 14 23:57:59.383: INFO: stdout: "true"
Feb 14 23:57:59.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:57:59.496: INFO: stderr: ""
Feb 14 23:57:59.496: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 23:57:59.496: INFO: validating pod update-demo-nautilus-jvbz9
Feb 14 23:57:59.501: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 23:57:59.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 23:57:59.501: INFO: update-demo-nautilus-jvbz9 is verified up and running
STEP: scaling up the replication controller
Feb 14 23:57:59.504: INFO: scanned /root for discovery docs: 
Feb 14 23:57:59.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9713'
Feb 14 23:58:00.895: INFO: stderr: ""
Feb 14 23:58:00.895: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 23:58:00.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:58:01.039: INFO: stderr: ""
Feb 14 23:58:01.039: INFO: stdout: "update-demo-nautilus-jvbz9 update-demo-nautilus-k5gmp "
Feb 14 23:58:01.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:01.145: INFO: stderr: ""
Feb 14 23:58:01.145: INFO: stdout: "true"
Feb 14 23:58:01.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:01.252: INFO: stderr: ""
Feb 14 23:58:01.252: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 23:58:01.252: INFO: validating pod update-demo-nautilus-jvbz9
Feb 14 23:58:01.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 23:58:01.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 23:58:01.271: INFO: update-demo-nautilus-jvbz9 is verified up and running
Feb 14 23:58:01.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5gmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:01.426: INFO: stderr: ""
Feb 14 23:58:01.427: INFO: stdout: ""
Feb 14 23:58:01.427: INFO: update-demo-nautilus-k5gmp is created but not running
Feb 14 23:58:06.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9713'
Feb 14 23:58:06.600: INFO: stderr: ""
Feb 14 23:58:06.600: INFO: stdout: "update-demo-nautilus-jvbz9 update-demo-nautilus-k5gmp "
Feb 14 23:58:06.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:06.705: INFO: stderr: ""
Feb 14 23:58:06.706: INFO: stdout: "true"
Feb 14 23:58:06.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvbz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:06.814: INFO: stderr: ""
Feb 14 23:58:06.814: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 23:58:06.814: INFO: validating pod update-demo-nautilus-jvbz9
Feb 14 23:58:06.819: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 23:58:06.819: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 23:58:06.819: INFO: update-demo-nautilus-jvbz9 is verified up and running
Feb 14 23:58:06.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5gmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:06.949: INFO: stderr: ""
Feb 14 23:58:06.949: INFO: stdout: "true"
Feb 14 23:58:06.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5gmp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9713'
Feb 14 23:58:07.106: INFO: stderr: ""
Feb 14 23:58:07.106: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 23:58:07.106: INFO: validating pod update-demo-nautilus-k5gmp
Feb 14 23:58:07.115: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 23:58:07.116: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 23:58:07.116: INFO: update-demo-nautilus-k5gmp is verified up and running
STEP: using delete to clean up resources
Feb 14 23:58:07.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9713'
Feb 14 23:58:07.239: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 23:58:07.240: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 23:58:07.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9713'
Feb 14 23:58:07.325: INFO: stderr: "No resources found in kubectl-9713 namespace.\n"
Feb 14 23:58:07.325: INFO: stdout: ""
Feb 14 23:58:07.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 23:58:07.401: INFO: stderr: ""
Feb 14 23:58:07.401: INFO: stdout: "update-demo-nautilus-jvbz9\nupdate-demo-nautilus-k5gmp\n"
Feb 14 23:58:07.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9713'
Feb 14 23:58:08.623: INFO: stderr: "No resources found in kubectl-9713 namespace.\n"
Feb 14 23:58:08.623: INFO: stdout: ""
Feb 14 23:58:08.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 23:58:08.761: INFO: stderr: ""
Feb 14 23:58:08.762: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:58:08.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9713" for this suite.

• [SLOW TEST:28.768 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":33,"skipped":560,"failed":0}
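The cleanup loop in the test above polls `kubectl get pods` with a go-template that prints only pods whose `metadata.deletionTimestamp` is unset, and waits until that output is empty. A minimal Python sketch of the same filter logic (the pod dicts below are hypothetical stand-ins for the real API objects):

```python
# Sketch of the cleanup filter: list the names of pods that are NOT yet
# marked for deletion (i.e. metadata.deletionTimestamp is absent).
def pods_pending_deletion(pods):
    """Return names of pods with no deletionTimestamp set."""
    return [
        p["metadata"]["name"]
        for p in pods
        if not p["metadata"].get("deletionTimestamp")
    ]

pods = [
    {"metadata": {"name": "update-demo-nautilus-jvbz9"}},
    {"metadata": {"name": "update-demo-nautilus-k5gmp",
                  "deletionTimestamp": "2020-02-14T23:58:07Z"}},
]
print(pods_pending_deletion(pods))  # only the first pod is still pending
```

Once every pod carries a deletionTimestamp, the function returns an empty list, which corresponds to the empty stdout the test waits for.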
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:58:08.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:58:09.484: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 23:58:09.691: INFO: Number of nodes with available pods: 0
Feb 14 23:58:09.691: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:10.707: INFO: Number of nodes with available pods: 0
Feb 14 23:58:10.707: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:12.490: INFO: Number of nodes with available pods: 0
Feb 14 23:58:12.491: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:13.077: INFO: Number of nodes with available pods: 0
Feb 14 23:58:13.077: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:13.702: INFO: Number of nodes with available pods: 0
Feb 14 23:58:13.702: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:14.754: INFO: Number of nodes with available pods: 0
Feb 14 23:58:14.754: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:16.800: INFO: Number of nodes with available pods: 0
Feb 14 23:58:16.800: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:17.708: INFO: Number of nodes with available pods: 0
Feb 14 23:58:17.708: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:18.728: INFO: Number of nodes with available pods: 1
Feb 14 23:58:18.728: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 14 23:58:19.706: INFO: Number of nodes with available pods: 1
Feb 14 23:58:19.706: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 14 23:58:20.710: INFO: Number of nodes with available pods: 2
Feb 14 23:58:20.710: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 14 23:58:20.789: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:20.789: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:21.809: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:21.809: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:23.041: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:23.041: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:23.806: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:23.806: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:24.806: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:24.806: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:25.808: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:25.808: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:25.808: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:26.811: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:26.811: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:26.811: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:27.812: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:27.812: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:27.812: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:28.808: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:28.808: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:28.808: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:29.806: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:29.806: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:29.806: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:30.807: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:30.807: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:30.807: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:31.806: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:31.806: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:31.806: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:32.805: INFO: Wrong image for pod: daemon-set-q6hw6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:32.805: INFO: Pod daemon-set-q6hw6 is not available
Feb 14 23:58:32.805: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:33.882: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:33.882: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:34.806: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:34.806: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:35.811: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:35.811: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:36.811: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:36.811: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:37.804: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:37.804: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:38.809: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:38.809: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:39.809: INFO: Pod daemon-set-hgnn8 is not available
Feb 14 23:58:39.809: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:40.804: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:41.810: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:42.809: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:43.807: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:44.810: INFO: Wrong image for pod: daemon-set-sw2wj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 23:58:44.810: INFO: Pod daemon-set-sw2wj is not available
Feb 14 23:58:45.810: INFO: Pod daemon-set-nj2jh is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 14 23:58:45.834: INFO: Number of nodes with available pods: 1
Feb 14 23:58:45.834: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:46.847: INFO: Number of nodes with available pods: 1
Feb 14 23:58:46.847: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:47.897: INFO: Number of nodes with available pods: 1
Feb 14 23:58:47.897: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:49.319: INFO: Number of nodes with available pods: 1
Feb 14 23:58:49.319: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:49.862: INFO: Number of nodes with available pods: 1
Feb 14 23:58:49.862: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:50.858: INFO: Number of nodes with available pods: 1
Feb 14 23:58:50.859: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:58:51.861: INFO: Number of nodes with available pods: 2
Feb 14 23:58:51.861: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8661, will wait for the garbage collector to delete the pods
Feb 14 23:58:51.973: INFO: Deleting DaemonSet.extensions daemon-set took: 13.484623ms
Feb 14 23:58:52.375: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.646534ms
Feb 14 23:59:02.388: INFO: Number of nodes with available pods: 0
Feb 14 23:59:02.388: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 23:59:02.393: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8661/daemonsets","resourceVersion":"8475151"},"items":null}

Feb 14 23:59:02.397: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8661/pods","resourceVersion":"8475151"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:59:02.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8661" for this suite.

• [SLOW TEST:53.650 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":34,"skipped":564,"failed":0}
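The DaemonSet log above shows a RollingUpdate proceeding one pod at a time: an old `httpd` pod is deleted, its node briefly reports "not available", and a replacement comes up with the new `agnhost` image before the next node is touched. A toy Python sketch of that one-node-at-a-time replacement, using the image names from the log (the loop is an illustration of the maxUnavailable=1 behaviour, not the controller's actual code):

```python
# Toy model of a DaemonSet RollingUpdate with maxUnavailable=1:
# replace pods node by node, never taking down more than one at once.
OLD = "docker.io/library/httpd:2.4.38-alpine"
NEW = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"

def rolling_update(pods_by_node):
    """pods_by_node maps node -> image. Mutates in place, yields progress."""
    for node, image in list(pods_by_node.items()):
        if image == NEW:
            continue
        del pods_by_node[node]       # old pod deleted: node is unavailable
        yield node, "unavailable"
        pods_by_node[node] = NEW     # replacement scheduled and ready
        yield node, "available"

cluster = {"jerma-node": OLD, "jerma-server-mvvl6gufaqub": OLD}
for step in rolling_update(cluster):
    print(step)
assert all(image == NEW for image in cluster.values())
```

This mirrors the log's shape: the "Pod ... is not available" lines correspond to the window between deleting an old pod and its replacement becoming ready.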
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:59:02.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 23:59:02.598: INFO: Number of nodes with available pods: 0
Feb 14 23:59:02.598: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:03.629: INFO: Number of nodes with available pods: 0
Feb 14 23:59:03.630: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:05.888: INFO: Number of nodes with available pods: 0
Feb 14 23:59:05.888: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:07.102: INFO: Number of nodes with available pods: 0
Feb 14 23:59:07.103: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:07.699: INFO: Number of nodes with available pods: 0
Feb 14 23:59:07.699: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:08.635: INFO: Number of nodes with available pods: 0
Feb 14 23:59:08.636: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:09.609: INFO: Number of nodes with available pods: 0
Feb 14 23:59:09.609: INFO: Node jerma-node is running more than one daemon pod
Feb 14 23:59:11.126: INFO: Number of nodes with available pods: 1
Feb 14 23:59:11.126: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 14 23:59:11.687: INFO: Number of nodes with available pods: 1
Feb 14 23:59:11.687: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 14 23:59:12.608: INFO: Number of nodes with available pods: 1
Feb 14 23:59:12.608: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 14 23:59:13.626: INFO: Number of nodes with available pods: 2
Feb 14 23:59:13.627: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 14 23:59:13.692: INFO: Number of nodes with available pods: 2
Feb 14 23:59:13.692: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1036, will wait for the garbage collector to delete the pods
Feb 14 23:59:14.877: INFO: Deleting DaemonSet.extensions daemon-set took: 14.372579ms
Feb 14 23:59:15.278: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.016613ms
Feb 14 23:59:21.487: INFO: Number of nodes with available pods: 0
Feb 14 23:59:21.487: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 23:59:21.492: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1036/daemonsets","resourceVersion":"8475274"},"items":null}

Feb 14 23:59:21.496: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1036/pods","resourceVersion":"8475274"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:59:21.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1036" for this suite.

• [SLOW TEST:19.133 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":35,"skipped":572,"failed":0}
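The "retry creating failed daemon pods" case flips one daemon pod's phase to `Failed` and checks that the controller deletes it and creates a fresh replacement, so every node ends up with a running pod again. A simplified sketch of that reconcile step (the real controller works through the API server and informers; names here are illustrative):

```python
import itertools

_name_seq = itertools.count()

def reconcile(pods_by_node, nodes):
    """One DaemonSet reconcile pass: drop Failed pods, create missing ones."""
    # Delete pods that have failed; in the cluster the garbage collector
    # actually removes them after the controller marks them for deletion.
    for node, pod in list(pods_by_node.items()):
        if pod["phase"] == "Failed":
            del pods_by_node[node]
    # Create a replacement on every node that lacks a daemon pod.
    for node in nodes:
        if node not in pods_by_node:
            pods_by_node[node] = {"name": f"daemon-set-{next(_name_seq)}",
                                  "phase": "Running"}
    return pods_by_node

nodes = ["jerma-node", "jerma-server-mvvl6gufaqub"]
pods = {n: {"name": f"daemon-set-{next(_name_seq)}", "phase": "Running"}
        for n in nodes}
pods["jerma-node"]["phase"] = "Failed"   # simulate the injected failure
reconcile(pods, nodes)
assert all(p["phase"] == "Running" for p in pods.values())
```

The test's "Wait for the failed daemon pod to be completely deleted" step corresponds to the deletion half of this pass; the revival is the creation half.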
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:59:21.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 14 23:59:21.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9061'
Feb 14 23:59:21.873: INFO: stderr: ""
Feb 14 23:59:21.873: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Feb 14 23:59:21.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9061'
Feb 14 23:59:25.289: INFO: stderr: ""
Feb 14 23:59:25.289: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:59:25.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9061" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":280,"completed":36,"skipped":598,"failed":0}
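`--restart=Never` in the kubectl run above makes kubectl create a bare Pod (rather than a managed workload) whose containers are never restarted once they exit. The kubelet's restart decision for a terminated container reduces to a small table over the pod's restartPolicy; a sketch:

```python
def should_restart(restart_policy, exit_code):
    """Kubelet-style restart decision for a terminated container."""
    if restart_policy == "Always":
        return True
    if restart_policy == "OnFailure":
        return exit_code != 0   # restart only on non-zero exit
    return False                # "Never": leave the container terminated

assert should_restart("Never", 1) is False
assert should_restart("OnFailure", 1) is True
assert should_restart("OnFailure", 0) is False
assert should_restart("Always", 0) is True
```

With `Never`, a pod that exits simply moves to the Succeeded or Failed phase, which is why the test only has to verify creation and then delete the pod.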
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:59:25.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 14 23:59:26.482: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 14 23:59:28.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 23:59:30.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 23:59:32.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321566, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 23:59:35.557: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:59:35.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:59:36.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1237" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.585 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":37,"skipped":610,"failed":0}
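The conversion webhook test deploys a service that the API server calls to translate custom resources between CRD versions: the resource is created at v1 and read back at v2. For a trivial CRD the webhook's core job is rewriting `apiVersion` on each object in the ConversionReview and echoing the request UID back. A hedged sketch of that handler body (the example object and its fields are hypothetical; a real webhook would also map any renamed fields):

```python
def convert(review):
    """Convert every object in a ConversionReview to desiredAPIVersion."""
    desired = review["request"]["desiredAPIVersion"]
    converted = []
    for obj in review["request"]["objects"]:
        out = dict(obj)
        out["apiVersion"] = desired  # the essential change for a trivial CRD
        converted.append(out)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": review["request"]["uid"],
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```

In the cluster this function would sit behind the HTTPS endpoint the test registers; the "Setting up server cert" step in the log exists because the API server only calls conversion webhooks over TLS.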
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:59:36.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:59:50.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8498" for this suite.

• [SLOW TEST:13.481 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":38,"skipped":617,"failed":0}
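The ResourceQuota steps above boil down to one admission-time inequality per resource: a pod is admitted only if current usage plus its requests stays within the quota's hard limits, and the usage is released again when the pod is deleted. A minimal sketch of that check (resource names and values are illustrative):

```python
def admit(hard, used, requests):
    """Admit a pod iff used + requested <= hard for every quoted resource."""
    for resource, limit in hard.items():
        if used.get(resource, 0) + requests.get(resource, 0) > limit:
            return False  # would exceed remaining quota: reject
    return True

hard = {"pods": 1, "cpu": 1.0, "memory": 512}
used = {"pods": 0, "cpu": 0.0, "memory": 0}

assert admit(hard, used, {"pods": 1, "cpu": 0.5, "memory": 252})      # fits
used = {"pods": 1, "cpu": 0.5, "memory": 252}                          # charged
assert not admit(hard, used, {"pods": 1, "cpu": 0.4, "memory": 100})  # exceeds
```

The "attempts to update pod resource requirements did not change quota usage" step reflects that pod resource requests are effectively immutable here, so the charged usage stays fixed until the pod is deleted.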
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:59:50.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 23:59:50.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 23:59:58.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7374" for this suite.

• [SLOW TEST:8.189 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":647,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 23:59:58.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 23:59:58.710: INFO: Waiting up to 5m0s for pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f" in namespace "emptydir-2653" to be "success or failure"
Feb 14 23:59:58.716: INFO: Pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.821658ms
Feb 15 00:00:00.727: INFO: Pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016381692s
Feb 15 00:00:02.732: INFO: Pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021437482s
Feb 15 00:00:04.737: INFO: Pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027234326s
Feb 15 00:00:06.743: INFO: Pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03256363s
STEP: Saw pod success
Feb 15 00:00:06.743: INFO: Pod "pod-f95ebad6-698b-4290-9059-f854ffb51f8f" satisfied condition "success or failure"
Feb 15 00:00:06.746: INFO: Trying to get logs from node jerma-node pod pod-f95ebad6-698b-4290-9059-f854ffb51f8f container test-container: 
STEP: delete the pod
Feb 15 00:00:06.792: INFO: Waiting for pod pod-f95ebad6-698b-4290-9059-f854ffb51f8f to disappear
Feb 15 00:00:06.811: INFO: Pod pod-f95ebad6-698b-4290-9059-f854ffb51f8f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:00:06.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2653" for this suite.

• [SLOW TEST:8.256 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":685,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:00:06.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-84dfd2e8-48d1-4489-ad19-1025c503cd40 in namespace container-probe-1308
Feb 15 00:00:15.271: INFO: Started pod liveness-84dfd2e8-48d1-4489-ad19-1025c503cd40 in namespace container-probe-1308
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 00:00:15.277: INFO: Initial restart count of pod liveness-84dfd2e8-48d1-4489-ad19-1025c503cd40 is 0
Feb 15 00:00:39.404: INFO: Restart count of pod container-probe-1308/liveness-84dfd2e8-48d1-4489-ad19-1025c503cd40 is now 1 (24.127164517s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:00:39.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1308" for this suite.

• [SLOW TEST:32.654 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":691,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:00:39.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:00:39.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 15 00:00:42.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1660 create -f -'
Feb 15 00:00:46.955: INFO: stderr: ""
Feb 15 00:00:46.955: INFO: stdout: "e2e-test-crd-publish-openapi-6774-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 15 00:00:46.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1660 delete e2e-test-crd-publish-openapi-6774-crds test-cr'
Feb 15 00:00:47.116: INFO: stderr: ""
Feb 15 00:00:47.116: INFO: stdout: "e2e-test-crd-publish-openapi-6774-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Feb 15 00:00:47.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1660 apply -f -'
Feb 15 00:00:48.044: INFO: stderr: ""
Feb 15 00:00:48.044: INFO: stdout: "e2e-test-crd-publish-openapi-6774-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 15 00:00:48.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1660 delete e2e-test-crd-publish-openapi-6774-crds test-cr'
Feb 15 00:00:48.169: INFO: stderr: ""
Feb 15 00:00:48.170: INFO: stdout: "e2e-test-crd-publish-openapi-6774-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 15 00:00:48.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6774-crds'
Feb 15 00:00:48.787: INFO: stderr: ""
Feb 15 00:00:48.788: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6774-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:00:52.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1660" for this suite.

• [SLOW TEST:13.180 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":42,"skipped":699,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:00:52.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4098
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 15 00:00:52.824: INFO: Found 0 stateful pods, waiting for 3
Feb 15 00:01:02.831: INFO: Found 2 stateful pods, waiting for 3
Feb 15 00:01:12.841: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:01:12.842: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:01:12.842: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 00:01:22.837: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:01:22.837: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:01:22.838: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:01:22.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4098 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:01:23.332: INFO: stderr: "I0215 00:01:23.122612    1020 log.go:172] (0xc0009dcf20) (0xc000b14320) Create stream\nI0215 00:01:23.122825    1020 log.go:172] (0xc0009dcf20) (0xc000b14320) Stream added, broadcasting: 1\nI0215 00:01:23.126935    1020 log.go:172] (0xc0009dcf20) Reply frame received for 1\nI0215 00:01:23.127000    1020 log.go:172] (0xc0009dcf20) (0xc000b0a6e0) Create stream\nI0215 00:01:23.127022    1020 log.go:172] (0xc0009dcf20) (0xc000b0a6e0) Stream added, broadcasting: 3\nI0215 00:01:23.128988    1020 log.go:172] (0xc0009dcf20) Reply frame received for 3\nI0215 00:01:23.129117    1020 log.go:172] (0xc0009dcf20) (0xc0008dc000) Create stream\nI0215 00:01:23.129129    1020 log.go:172] (0xc0009dcf20) (0xc0008dc000) Stream added, broadcasting: 5\nI0215 00:01:23.130754    1020 log.go:172] (0xc0009dcf20) Reply frame received for 5\nI0215 00:01:23.188496    1020 log.go:172] (0xc0009dcf20) Data frame received for 5\nI0215 00:01:23.188683    1020 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0215 00:01:23.188703    1020 log.go:172] (0xc0008dc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:01:23.231142    1020 log.go:172] (0xc0009dcf20) Data frame received for 3\nI0215 00:01:23.231161    1020 log.go:172] (0xc000b0a6e0) (3) Data frame handling\nI0215 00:01:23.231179    1020 log.go:172] (0xc000b0a6e0) (3) Data frame sent\nI0215 00:01:23.316275    1020 log.go:172] (0xc0009dcf20) Data frame received for 1\nI0215 00:01:23.316430    1020 log.go:172] (0xc0009dcf20) (0xc000b0a6e0) Stream removed, broadcasting: 3\nI0215 00:01:23.316520    1020 log.go:172] (0xc000b14320) (1) Data frame handling\nI0215 00:01:23.316542    1020 log.go:172] (0xc000b14320) (1) Data frame sent\nI0215 00:01:23.316656    1020 log.go:172] (0xc0009dcf20) (0xc0008dc000) Stream removed, broadcasting: 5\nI0215 00:01:23.316701    1020 log.go:172] (0xc0009dcf20) (0xc000b14320) Stream removed, broadcasting: 1\nI0215 00:01:23.316723    1020 log.go:172] (0xc0009dcf20) Go away received\nI0215 00:01:23.318003    1020 log.go:172] (0xc0009dcf20) (0xc000b14320) Stream removed, broadcasting: 1\nI0215 00:01:23.318019    1020 log.go:172] (0xc0009dcf20) (0xc000b0a6e0) Stream removed, broadcasting: 3\nI0215 00:01:23.318229    1020 log.go:172] (0xc0009dcf20) (0xc0008dc000) Stream removed, broadcasting: 5\n"
Feb 15 00:01:23.332: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:01:23.333: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 15 00:01:33.410: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 15 00:01:43.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4098 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:01:43.881: INFO: stderr: "I0215 00:01:43.659175    1040 log.go:172] (0xc000a7e840) (0xc0009a6320) Create stream\nI0215 00:01:43.659320    1040 log.go:172] (0xc000a7e840) (0xc0009a6320) Stream added, broadcasting: 1\nI0215 00:01:43.676616    1040 log.go:172] (0xc000a7e840) Reply frame received for 1\nI0215 00:01:43.676748    1040 log.go:172] (0xc000a7e840) (0xc0005de820) Create stream\nI0215 00:01:43.676781    1040 log.go:172] (0xc000a7e840) (0xc0005de820) Stream added, broadcasting: 3\nI0215 00:01:43.679959    1040 log.go:172] (0xc000a7e840) Reply frame received for 3\nI0215 00:01:43.680016    1040 log.go:172] (0xc000a7e840) (0xc0002df4a0) Create stream\nI0215 00:01:43.680031    1040 log.go:172] (0xc000a7e840) (0xc0002df4a0) Stream added, broadcasting: 5\nI0215 00:01:43.682027    1040 log.go:172] (0xc000a7e840) Reply frame received for 5\nI0215 00:01:43.753028    1040 log.go:172] (0xc000a7e840) Data frame received for 3\nI0215 00:01:43.753136    1040 log.go:172] (0xc0005de820) (3) Data frame handling\nI0215 00:01:43.753153    1040 log.go:172] (0xc0005de820) (3) Data frame sent\nI0215 00:01:43.754783    1040 log.go:172] (0xc000a7e840) Data frame received for 5\nI0215 00:01:43.754936    1040 log.go:172] (0xc0002df4a0) (5) Data frame handling\nI0215 00:01:43.755006    1040 log.go:172] (0xc0002df4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0215 00:01:43.861901    1040 log.go:172] (0xc000a7e840) Data frame received for 1\nI0215 00:01:43.862609    1040 log.go:172] (0xc000a7e840) (0xc0002df4a0) Stream removed, broadcasting: 5\nI0215 00:01:43.862698    1040 log.go:172] (0xc0009a6320) (1) Data frame handling\nI0215 00:01:43.862713    1040 log.go:172] (0xc0009a6320) (1) Data frame sent\nI0215 00:01:43.862757    1040 log.go:172] (0xc000a7e840) (0xc0005de820) Stream removed, broadcasting: 3\nI0215 00:01:43.862806    1040 log.go:172] (0xc000a7e840) (0xc0009a6320) Stream removed, broadcasting: 1\nI0215 00:01:43.862822    1040 log.go:172] (0xc000a7e840) Go away received\nI0215 00:01:43.864029    1040 log.go:172] (0xc000a7e840) (0xc0009a6320) Stream removed, broadcasting: 1\nI0215 00:01:43.864054    1040 log.go:172] (0xc000a7e840) (0xc0005de820) Stream removed, broadcasting: 3\nI0215 00:01:43.864073    1040 log.go:172] (0xc000a7e840) (0xc0002df4a0) Stream removed, broadcasting: 5\n"
Feb 15 00:01:43.881: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:01:43.881: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:01:54.005: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:01:54.006: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:01:54.006: INFO: Waiting for Pod statefulset-4098/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:02:04.293: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:02:04.294: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:02:14.072: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:02:14.073: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:02:24.036: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 15 00:02:34.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4098 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:02:34.576: INFO: stderr: "I0215 00:02:34.281133    1060 log.go:172] (0xc0000f4bb0) (0xc0006cdf40) Create stream\nI0215 00:02:34.281252    1060 log.go:172] (0xc0000f4bb0) (0xc0006cdf40) Stream added, broadcasting: 1\nI0215 00:02:34.283481    1060 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0215 00:02:34.283519    1060 log.go:172] (0xc0000f4bb0) (0xc000656820) Create stream\nI0215 00:02:34.283526    1060 log.go:172] (0xc0000f4bb0) (0xc000656820) Stream added, broadcasting: 3\nI0215 00:02:34.284461    1060 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0215 00:02:34.284481    1060 log.go:172] (0xc0000f4bb0) (0xc0006c54a0) Create stream\nI0215 00:02:34.284486    1060 log.go:172] (0xc0000f4bb0) (0xc0006c54a0) Stream added, broadcasting: 5\nI0215 00:02:34.285627    1060 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0215 00:02:34.357817    1060 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0215 00:02:34.357870    1060 log.go:172] (0xc0006c54a0) (5) Data frame handling\nI0215 00:02:34.357882    1060 log.go:172] (0xc0006c54a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:02:34.436763    1060 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0215 00:02:34.436808    1060 log.go:172] (0xc000656820) (3) Data frame handling\nI0215 00:02:34.436826    1060 log.go:172] (0xc000656820) (3) Data frame sent\nI0215 00:02:34.552578    1060 log.go:172] (0xc0000f4bb0) (0xc000656820) Stream removed, broadcasting: 3\nI0215 00:02:34.553401    1060 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0215 00:02:34.553457    1060 log.go:172] (0xc0006cdf40) (1) Data frame handling\nI0215 00:02:34.553564    1060 log.go:172] (0xc0006cdf40) (1) Data frame sent\nI0215 00:02:34.553672    1060 log.go:172] (0xc0000f4bb0) (0xc0006c54a0) Stream removed, broadcasting: 5\nI0215 00:02:34.553805    1060 log.go:172] (0xc0000f4bb0) (0xc0006cdf40) Stream removed, broadcasting: 1\nI0215 00:02:34.553852    1060 log.go:172] (0xc0000f4bb0) Go away received\nI0215 00:02:34.555016    1060 log.go:172] (0xc0000f4bb0) (0xc0006cdf40) Stream removed, broadcasting: 1\nI0215 00:02:34.555030    1060 log.go:172] (0xc0000f4bb0) (0xc000656820) Stream removed, broadcasting: 3\nI0215 00:02:34.555039    1060 log.go:172] (0xc0000f4bb0) (0xc0006c54a0) Stream removed, broadcasting: 5\n"
Feb 15 00:02:34.577: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:02:34.577: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:02:44.635: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 15 00:02:54.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4098 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:02:55.182: INFO: stderr: "I0215 00:02:54.909831    1080 log.go:172] (0xc000abefd0) (0xc000c603c0) Create stream\nI0215 00:02:54.910089    1080 log.go:172] (0xc000abefd0) (0xc000c603c0) Stream added, broadcasting: 1\nI0215 00:02:54.913792    1080 log.go:172] (0xc000abefd0) Reply frame received for 1\nI0215 00:02:54.913848    1080 log.go:172] (0xc000abefd0) (0xc000ab60a0) Create stream\nI0215 00:02:54.913860    1080 log.go:172] (0xc000abefd0) (0xc000ab60a0) Stream added, broadcasting: 3\nI0215 00:02:54.915493    1080 log.go:172] (0xc000abefd0) Reply frame received for 3\nI0215 00:02:54.915568    1080 log.go:172] (0xc000abefd0) (0xc000ab6140) Create stream\nI0215 00:02:54.915579    1080 log.go:172] (0xc000abefd0) (0xc000ab6140) Stream added, broadcasting: 5\nI0215 00:02:54.917300    1080 log.go:172] (0xc000abefd0) Reply frame received for 5\nI0215 00:02:55.018472    1080 log.go:172] (0xc000abefd0) Data frame received for 5\nI0215 00:02:55.018924    1080 log.go:172] (0xc000ab6140) (5) Data frame handling\nI0215 00:02:55.018958    1080 log.go:172] (0xc000ab6140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0215 00:02:55.019091    1080 log.go:172] (0xc000abefd0) Data frame received for 3\nI0215 00:02:55.019108    1080 log.go:172] (0xc000ab60a0) (3) Data frame handling\nI0215 00:02:55.019128    1080 log.go:172] (0xc000ab60a0) (3) Data frame sent\nI0215 00:02:55.167954    1080 log.go:172] (0xc000abefd0) (0xc000ab60a0) Stream removed, broadcasting: 3\nI0215 00:02:55.168172    1080 log.go:172] (0xc000abefd0) Data frame received for 1\nI0215 00:02:55.168223    1080 log.go:172] (0xc000c603c0) (1) Data frame handling\nI0215 00:02:55.168268    1080 log.go:172] (0xc000c603c0) (1) Data frame sent\nI0215 00:02:55.168299    1080 log.go:172] (0xc000abefd0) (0xc000ab6140) Stream removed, broadcasting: 5\nI0215 00:02:55.168394    1080 log.go:172] (0xc000abefd0) (0xc000c603c0) Stream removed, broadcasting: 1\nI0215 00:02:55.168417    1080 log.go:172] (0xc000abefd0) Go away received\nI0215 00:02:55.169460    1080 log.go:172] (0xc000abefd0) (0xc000c603c0) Stream removed, broadcasting: 1\nI0215 00:02:55.169478    1080 log.go:172] (0xc000abefd0) (0xc000ab60a0) Stream removed, broadcasting: 3\nI0215 00:02:55.169632    1080 log.go:172] (0xc000abefd0) (0xc000ab6140) Stream removed, broadcasting: 5\n"
Feb 15 00:02:55.183: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:02:55.183: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:03:05.244: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:03:05.244: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 15 00:03:05.244: INFO: Waiting for Pod statefulset-4098/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 15 00:03:15.258: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:03:15.258: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 15 00:03:15.258: INFO: Waiting for Pod statefulset-4098/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 15 00:03:25.256: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:03:25.256: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 15 00:03:35.261: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
Feb 15 00:03:35.261: INFO: Waiting for Pod statefulset-4098/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 15 00:03:45.259: INFO: Waiting for StatefulSet statefulset-4098/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 15 00:03:55.261: INFO: Deleting all statefulset in ns statefulset-4098
Feb 15 00:03:55.267: INFO: Scaling statefulset ss2 to 0
Feb 15 00:04:25.300: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:04:25.307: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:04:25.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4098" for this suite.

• [SLOW TEST:212.688 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":43,"skipped":701,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:04:25.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:04:26.298: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:04:28.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:04:30.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:04:32.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:04:34.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321866, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:04:37.371: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 15 00:04:37.408: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:04:37.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3413" for this suite.
STEP: Destroying namespace "webhook-3413-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.222 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":44,"skipped":707,"failed":0}
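The passing test above registers a validating webhook that rejects CRD creation. A minimal sketch of what such a ValidatingWebhookConfiguration looks like — the name, webhook path, and caBundle below are illustrative placeholders, not values taken from this run; only the service name and namespace appear in the log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-webhook                    # illustrative name
webhooks:
  - name: deny-crd.example.com              # illustrative name
    rules:
      - apiGroups: ["apiextensions.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["customresourcedefinitions"]
    clientConfig:
      service:
        namespace: webhook-3413             # namespace from this run
        name: e2e-test-webhook              # service name from this run
        path: /crd                          # illustrative path
      caBundle: <base64-encoded CA cert>    # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

With `failurePolicy: Fail`, a CREATE of any CustomResourceDefinition is denied whenever the webhook rejects it (or is unreachable), which is the behavior the "should be denied by the webhook" step verifies.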
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:04:37.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:04:38.422: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:04:40.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:04:42.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:04:44.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717321878, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:04:47.494: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:04:47.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2417-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:04:48.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6578" for this suite.
STEP: Destroying namespace "webhook-6578-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.285 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":45,"skipped":708,"failed":0}
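The mutation test above registers a mutating webhook for the custom resource `e2e-test-webhook-2417-crds.webhook.example.com`. A hedged sketch of the corresponding MutatingWebhookConfiguration — the metadata name and webhook path are illustrative, while the API group, service name, and namespace come from the log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource              # illustrative name
webhooks:
  - name: mutate-cr.webhook.example.com     # illustrative name
    rules:
      - apiGroups: ["webhook.example.com"]  # group seen in the log
        apiVersions: ["*"]
        operations: ["CREATE"]
        resources: ["*"]
    clientConfig:
      service:
        namespace: webhook-6578             # namespace from this run
        name: e2e-test-webhook              # service name from this run
        path: /mutating-custom-resource     # illustrative path
      caBundle: <base64-encoded CA cert>    # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

The webhook server returns a JSONPatch in its AdmissionReview response, so the custom resource created in the test arrives in etcd already mutated.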
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:04:48.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:05:00.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8136" for this suite.

• [SLOW TEST:11.259 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":46,"skipped":715,"failed":0}
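The ResourceQuota lifecycle test above creates a quota, watches its status track a ReplicaSet's creation, and verifies the usage is released on deletion. A minimal sketch of an object-count quota for ReplicaSets — the quota name and limit are illustrative; the namespace is the one from this run:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                  # illustrative name
  namespace: resourcequota-8136     # namespace from this run
spec:
  hard:
    count/replicasets.apps: "5"     # cap on the number of ReplicaSets
```

After creation, `status.used["count/replicasets.apps"]` rises when a ReplicaSet is created and drops back when it is deleted, which is exactly what the "captures replicaset creation" and "released usage" steps assert.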
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:05:00.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-wmxfd in namespace proxy-2186
I0215 00:05:00.267985      10 runners.go:189] Created replication controller with name: proxy-service-wmxfd, namespace: proxy-2186, replica count: 1
I0215 00:05:01.319868      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:02.320848      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:03.322264      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:04.323038      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:05.323809      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:06.325087      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:07.326100      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:08.326854      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:05:09.327562      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 00:05:10.328248      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 00:05:11.329001      10 runners.go:189] proxy-service-wmxfd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 15 00:05:11.680: INFO: setup took 11.477421434s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
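The 16 cases iterate over the apiserver proxy URL forms visible in the attempts below: pod vs. service targets, with an optional scheme prefix (`http:`/`https:`) and an optional port or port-name suffix. The general patterns (a summary of the paths in this log, not an exhaustive API reference) are:

```
/api/v1/namespaces/<ns>/pods/[<scheme>:]<pod-name>[:<port>]/proxy/<path>
/api/v1/namespaces/<ns>/services/[<scheme>:]<svc-name>[:<port-name>]/proxy/<path>
```

Each attempt records the echoed response body (truncated by the test framework), the HTTP status, and the round-trip latency.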
Feb 15 00:05:11.711: INFO: (0) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 28.753216ms)
Feb 15 00:05:11.712: INFO: (0) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 31.034134ms)
Feb 15 00:05:11.713: INFO: (0) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 30.113868ms)
Feb 15 00:05:11.713: INFO: (0) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 31.95211ms)
Feb 15 00:05:11.713: INFO: (0) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 31.562218ms)
Feb 15 00:05:11.714: INFO: (0) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 32.459511ms)
Feb 15 00:05:11.715: INFO: (0) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 32.460842ms)
Feb 15 00:05:11.720: INFO: (0) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 35.603149ms)
Feb 15 00:05:11.721: INFO: (0) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 40.681275ms)
Feb 15 00:05:11.721: INFO: (0) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 40.152183ms)
Feb 15 00:05:11.721: INFO: (0) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 40.671876ms)
Feb 15 00:05:11.725: INFO: (0) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: ... (200; 16.287645ms)
Feb 15 00:05:11.746: INFO: (1) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 16.681121ms)
Feb 15 00:05:11.748: INFO: (1) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 18.880836ms)
Feb 15 00:05:11.749: INFO: (1) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 18.845774ms)
Feb 15 00:05:11.749: INFO: (1) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 19.113752ms)
Feb 15 00:05:11.751: INFO: (1) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 21.443536ms)
Feb 15 00:05:11.754: INFO: (1) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 23.563648ms)
Feb 15 00:05:11.754: INFO: (1) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 24.596487ms)
Feb 15 00:05:11.756: INFO: (1) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 26.118682ms)
Feb 15 00:05:11.760: INFO: (1) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 30.591746ms)
Feb 15 00:05:11.761: INFO: (1) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 30.843769ms)
Feb 15 00:05:11.769: INFO: (1) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 39.757069ms)
Feb 15 00:05:11.772: INFO: (1) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 41.750759ms)
Feb 15 00:05:11.772: INFO: (1) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 42.665743ms)
Feb 15 00:05:11.772: INFO: (1) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 43.056832ms)
Feb 15 00:05:11.782: INFO: (2) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 9.125128ms)
Feb 15 00:05:11.796: INFO: (2) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 23.19809ms)
Feb 15 00:05:11.796: INFO: (2) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 23.427806ms)
Feb 15 00:05:11.796: INFO: (2) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 22.798068ms)
Feb 15 00:05:11.797: INFO: (2) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 22.727665ms)
Feb 15 00:05:11.799: INFO: (2) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 25.139178ms)
Feb 15 00:05:11.800: INFO: (2) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 26.596109ms)
Feb 15 00:05:11.804: INFO: (2) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 29.718115ms)
Feb 15 00:05:11.805: INFO: (2) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 30.703657ms)
Feb 15 00:05:11.805: INFO: (2) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 30.879977ms)
Feb 15 00:05:11.805: INFO: (2) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 32.105524ms)
Feb 15 00:05:11.805: INFO: (2) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 32.19533ms)
Feb 15 00:05:11.806: INFO: (2) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 32.142253ms)
Feb 15 00:05:11.811: INFO: (2) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 37.794559ms)
Feb 15 00:05:11.812: INFO: (2) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 38.150457ms)
Feb 15 00:05:11.812: INFO: (2) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: ... (200; 29.032075ms)
Feb 15 00:05:11.843: INFO: (3) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 30.467295ms)
Feb 15 00:05:11.843: INFO: (3) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 30.588183ms)
Feb 15 00:05:11.843: INFO: (3) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 30.74307ms)
Feb 15 00:05:11.843: INFO: (3) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 31.100265ms)
Feb 15 00:05:11.844: INFO: (3) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 31.223077ms)
Feb 15 00:05:11.846: INFO: (3) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 33.625906ms)
Feb 15 00:05:11.859: INFO: (4) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 12.262759ms)
Feb 15 00:05:11.865: INFO: (4) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 17.812607ms)
Feb 15 00:05:11.866: INFO: (4) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 23.667482ms)
Feb 15 00:05:11.871: INFO: (4) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 24.352729ms)
Feb 15 00:05:11.871: INFO: (4) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 25.46144ms)
Feb 15 00:05:11.872: INFO: (4) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 24.314489ms)
Feb 15 00:05:11.872: INFO: (4) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 24.611368ms)
Feb 15 00:05:11.872: INFO: (4) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 25.183807ms)
Feb 15 00:05:11.872: INFO: (4) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 24.748554ms)
Feb 15 00:05:11.874: INFO: (4) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 27.000982ms)
Feb 15 00:05:11.875: INFO: (4) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 27.961897ms)
Feb 15 00:05:11.885: INFO: (5) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 10.031498ms)
Feb 15 00:05:11.885: INFO: (5) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 9.63188ms)
Feb 15 00:05:11.886: INFO: (5) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 14.039924ms)
Feb 15 00:05:11.892: INFO: (5) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 15.822912ms)
Feb 15 00:05:11.892: INFO: (5) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 16.571778ms)
Feb 15 00:05:11.892: INFO: (5) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 16.716135ms)
Feb 15 00:05:11.893: INFO: (5) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 17.388928ms)
Feb 15 00:05:11.894: INFO: (5) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 17.692289ms)
Feb 15 00:05:11.894: INFO: (5) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 17.664854ms)
Feb 15 00:05:11.894: INFO: (5) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 18.654918ms)
Feb 15 00:05:11.894: INFO: (5) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 18.081876ms)
Feb 15 00:05:11.894: INFO: (5) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 18.941513ms)
Feb 15 00:05:11.895: INFO: (5) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 18.811623ms)
Feb 15 00:05:11.895: INFO: (5) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 18.924927ms)
Feb 15 00:05:11.897: INFO: (5) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 21.070841ms)
Feb 15 00:05:11.908: INFO: (6) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 10.092398ms)
Feb 15 00:05:11.908: INFO: (6) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 9.989731ms)
Feb 15 00:05:11.918: INFO: (6) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 19.934062ms)
Feb 15 00:05:11.918: INFO: (6) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 20.491126ms)
Feb 15 00:05:11.919: INFO: (6) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 21.258084ms)
Feb 15 00:05:11.919: INFO: (6) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 21.674043ms)
Feb 15 00:05:11.919: INFO: (6) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 22.438186ms)
Feb 15 00:05:11.920: INFO: (6) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 22.357645ms)
Feb 15 00:05:11.920: INFO: (6) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 22.342045ms)
Feb 15 00:05:11.921: INFO: (6) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 23.207023ms)
Feb 15 00:05:11.921: INFO: (6) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 23.075365ms)
Feb 15 00:05:11.928: INFO: (7) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 6.465009ms)
Feb 15 00:05:11.930: INFO: (7) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 8.15737ms)
Feb 15 00:05:11.930: INFO: (7) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 8.693385ms)
Feb 15 00:05:11.934: INFO: (7) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 13.046781ms)
Feb 15 00:05:11.935: INFO: (7) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 13.435072ms)
Feb 15 00:05:11.935: INFO: (7) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 13.343256ms)
Feb 15 00:05:11.936: INFO: (7) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 14.261077ms)
Feb 15 00:05:11.936: INFO: (7) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 14.479484ms)
Feb 15 00:05:11.937: INFO: (7) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 15.87509ms)
Feb 15 00:05:11.938: INFO: (7) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 16.17128ms)
Feb 15 00:05:11.938: INFO: (7) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 16.594477ms)
Feb 15 00:05:11.939: INFO: (7) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 17.577508ms)
Feb 15 00:05:11.939: INFO: (7) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 18.256228ms)
Feb 15 00:05:11.947: INFO: (8) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 6.983348ms)
Feb 15 00:05:11.947: INFO: (8) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 7.374644ms)
Feb 15 00:05:11.947: INFO: (8) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 7.439037ms)
Feb 15 00:05:11.948: INFO: (8) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 7.76875ms)
Feb 15 00:05:11.948: INFO: (8) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 10.660278ms)
Feb 15 00:05:11.952: INFO: (8) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 12.632143ms)
Feb 15 00:05:11.952: INFO: (8) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 12.262733ms)
Feb 15 00:05:11.952: INFO: (8) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 12.454874ms)
Feb 15 00:05:11.952: INFO: (8) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 12.605044ms)
Feb 15 00:05:11.952: INFO: (8) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 12.655852ms)
Feb 15 00:05:11.953: INFO: (8) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 13.172615ms)
Feb 15 00:05:11.953: INFO: (8) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 13.145313ms)
Feb 15 00:05:11.953: INFO: (8) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 13.120958ms)
Feb 15 00:05:11.953: INFO: (8) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 13.139368ms)
Feb 15 00:05:11.953: INFO: (8) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 13.724071ms)
Feb 15 00:05:11.959: INFO: (9) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 5.68797ms)
Feb 15 00:05:11.960: INFO: (9) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 6.366549ms)
Feb 15 00:05:11.960: INFO: (9) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 6.822513ms)
Feb 15 00:05:11.960: INFO: (9) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 6.887446ms)
Feb 15 00:05:11.960: INFO: (9) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 6.870446ms)
Feb 15 00:05:11.960: INFO: (9) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 6.965439ms)
Feb 15 00:05:11.960: INFO: (9) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 7.202118ms)
Feb 15 00:05:11.961: INFO: (9) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 6.958606ms)
Feb 15 00:05:11.961: INFO: (9) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 7.10278ms)
Feb 15 00:05:11.963: INFO: (9) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 10.155305ms)
Feb 15 00:05:11.964: INFO: (9) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 10.312007ms)
Feb 15 00:05:11.964: INFO: (9) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 10.402901ms)
Feb 15 00:05:11.964: INFO: (9) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 10.26574ms)
Feb 15 00:05:11.964: INFO: (9) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 10.415622ms)
Feb 15 00:05:11.965: INFO: (9) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 11.704561ms)
Feb 15 00:05:11.979: INFO: (10) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 13.520897ms)
Feb 15 00:05:11.979: INFO: (10) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 13.732843ms)
Feb 15 00:05:11.979: INFO: (10) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 13.798824ms)
Feb 15 00:05:11.979: INFO: (10) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 19.09022ms)
Feb 15 00:05:11.985: INFO: (10) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 19.030873ms)
Feb 15 00:05:11.984: INFO: (10) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 19.001311ms)
Feb 15 00:05:11.984: INFO: (10) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 18.898704ms)
Feb 15 00:05:11.995: INFO: (11) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 10.228238ms)
Feb 15 00:05:11.999: INFO: (11) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 13.821932ms)
Feb 15 00:05:11.999: INFO: (11) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 14.190117ms)
Feb 15 00:05:11.999: INFO: (11) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 14.201794ms)
Feb 15 00:05:12.000: INFO: (11) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 14.299574ms)
Feb 15 00:05:12.000: INFO: (11) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 14.709731ms)
Feb 15 00:05:12.000: INFO: (11) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 15.211447ms)
Feb 15 00:05:12.000: INFO: (11) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 15.500763ms)
Feb 15 00:05:12.001: INFO: (11) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 15.875185ms)
Feb 15 00:05:12.001: INFO: (11) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 15.718942ms)
Feb 15 00:05:12.001: INFO: (11) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 16.245762ms)
Feb 15 00:05:12.001: INFO: (11) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 20.278262ms)
Feb 15 00:05:12.024: INFO: (12) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 17.843846ms)
Feb 15 00:05:12.025: INFO: (12) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 18.634407ms)
Feb 15 00:05:12.025: INFO: (12) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: ... (200; 22.175791ms)
Feb 15 00:05:12.028: INFO: (12) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 21.391652ms)
Feb 15 00:05:12.028: INFO: (12) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 21.838052ms)
Feb 15 00:05:12.028: INFO: (12) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 21.986124ms)
Feb 15 00:05:12.028: INFO: (12) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 21.496671ms)
Feb 15 00:05:12.029: INFO: (12) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 22.129762ms)
Feb 15 00:05:12.032: INFO: (12) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 25.82045ms)
Feb 15 00:05:12.032: INFO: (12) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 25.829667ms)
Feb 15 00:05:12.032: INFO: (12) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 25.893414ms)
Feb 15 00:05:12.034: INFO: (12) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 27.516813ms)
Feb 15 00:05:12.044: INFO: (13) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 10.253958ms)
Feb 15 00:05:12.045: INFO: (13) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 10.31946ms)
Feb 15 00:05:12.045: INFO: (13) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 10.282538ms)
Feb 15 00:05:12.045: INFO: (13) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 10.848579ms)
Feb 15 00:05:12.045: INFO: (13) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 11.052872ms)
Feb 15 00:05:12.046: INFO: (13) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 11.998073ms)
Feb 15 00:05:12.046: INFO: (13) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 11.588595ms)
Feb 15 00:05:12.046: INFO: (13) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 12.221229ms)
Feb 15 00:05:12.047: INFO: (13) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 12.589363ms)
Feb 15 00:05:12.047: INFO: (13) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 12.217068ms)
Feb 15 00:05:12.047: INFO: (13) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 14.061446ms)
Feb 15 00:05:12.048: INFO: (13) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 13.667131ms)
Feb 15 00:05:12.048: INFO: (13) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 13.976335ms)
Feb 15 00:05:12.048: INFO: (13) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 14.150788ms)
Feb 15 00:05:12.059: INFO: (14) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 10.38412ms)
Feb 15 00:05:12.059: INFO: (14) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 10.6745ms)
Feb 15 00:05:12.060: INFO: (14) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 11.287965ms)
Feb 15 00:05:12.060: INFO: (14) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 11.732448ms)
Feb 15 00:05:12.060: INFO: (14) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 11.36664ms)
Feb 15 00:05:12.060: INFO: (14) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 12.218079ms)
Feb 15 00:05:12.061: INFO: (14) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 12.156081ms)
Feb 15 00:05:12.061: INFO: (14) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 16.201313ms)
Feb 15 00:05:12.074: INFO: (15) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 8.787256ms)
Feb 15 00:05:12.074: INFO: (15) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 11.072711ms)
Feb 15 00:05:12.076: INFO: (15) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 11.334453ms)
Feb 15 00:05:12.077: INFO: (15) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 11.840423ms)
Feb 15 00:05:12.077: INFO: (15) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 12.248801ms)
Feb 15 00:05:12.077: INFO: (15) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 12.156937ms)
Feb 15 00:05:12.077: INFO: (15) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 12.432569ms)
Feb 15 00:05:12.077: INFO: (15) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 12.324128ms)
Feb 15 00:05:12.080: INFO: (15) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 15.046929ms)
Feb 15 00:05:12.081: INFO: (15) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 15.892538ms)
Feb 15 00:05:12.081: INFO: (15) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 15.527311ms)
Feb 15 00:05:12.086: INFO: (16) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 5.196309ms)
Feb 15 00:05:12.086: INFO: (16) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 4.722429ms)
Feb 15 00:05:12.086: INFO: (16) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 5.388948ms)
Feb 15 00:05:12.087: INFO: (16) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 6.399188ms)
Feb 15 00:05:12.088: INFO: (16) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 6.99212ms)
Feb 15 00:05:12.088: INFO: (16) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 6.825828ms)
Feb 15 00:05:12.088: INFO: (16) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 7.214751ms)
Feb 15 00:05:12.088: INFO: (16) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 7.39048ms)
Feb 15 00:05:12.090: INFO: (16) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 8.972695ms)
Feb 15 00:05:12.091: INFO: (16) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 9.903528ms)
Feb 15 00:05:12.091: INFO: (16) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 10.132264ms)
Feb 15 00:05:12.093: INFO: (16) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 11.954659ms)
Feb 15 00:05:12.093: INFO: (16) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 11.998815ms)
Feb 15 00:05:12.093: INFO: (16) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 12.117969ms)
Feb 15 00:05:12.094: INFO: (16) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 12.496134ms)
Feb 15 00:05:12.100: INFO: (17) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test<... (200; 7.149409ms)
Feb 15 00:05:12.101: INFO: (17) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 7.753534ms)
Feb 15 00:05:12.102: INFO: (17) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 7.491672ms)
Feb 15 00:05:12.102: INFO: (17) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 7.776162ms)
Feb 15 00:05:12.102: INFO: (17) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 8.195202ms)
Feb 15 00:05:12.103: INFO: (17) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 9.21939ms)
Feb 15 00:05:12.104: INFO: (17) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 9.799541ms)
Feb 15 00:05:12.104: INFO: (17) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 10.110886ms)
Feb 15 00:05:12.105: INFO: (17) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 11.546706ms)
Feb 15 00:05:12.106: INFO: (17) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 11.86727ms)
Feb 15 00:05:12.106: INFO: (17) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname1/proxy/: tls baz (200; 11.96303ms)
Feb 15 00:05:12.107: INFO: (17) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 12.854648ms)
Feb 15 00:05:12.107: INFO: (17) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname1/proxy/: foo (200; 13.232233ms)
Feb 15 00:05:12.108: INFO: (17) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 14.130391ms)
Feb 15 00:05:12.111: INFO: (18) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 3.090428ms)
Feb 15 00:05:12.121: INFO: (18) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/: test (200; 12.83824ms)
Feb 15 00:05:12.121: INFO: (18) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 12.775622ms)
Feb 15 00:05:12.122: INFO: (18) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 13.848591ms)
Feb 15 00:05:12.122: INFO: (18) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 14.109792ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname2/proxy/: bar (200; 16.377998ms)
Feb 15 00:05:12.124: INFO: (18) /api/v1/namespaces/proxy-2186/services/https:proxy-service-wmxfd:tlsportname2/proxy/: tls qux (200; 16.074731ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 16.402355ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/services/http:proxy-service-wmxfd:portname1/proxy/: foo (200; 16.484013ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 16.399216ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 16.467938ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 16.68445ms)
Feb 15 00:05:12.125: INFO: (18) /api/v1/namespaces/proxy-2186/services/proxy-service-wmxfd:portname2/proxy/: bar (200; 17.065678ms)
Feb 15 00:05:12.138: INFO: (19) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:1080/proxy/: ... (200; 12.871858ms)
Feb 15 00:05:12.138: INFO: (19) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 12.83952ms)
Feb 15 00:05:12.139: INFO: (19) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:162/proxy/: bar (200; 13.942205ms)
Feb 15 00:05:12.139: INFO: (19) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd/proxy/: test (200; 13.692157ms)
Feb 15 00:05:12.139: INFO: (19) /api/v1/namespaces/proxy-2186/pods/http:proxy-service-wmxfd-ct8gd:160/proxy/: foo (200; 13.647542ms)
Feb 15 00:05:12.139: INFO: (19) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:462/proxy/: tls qux (200; 13.734175ms)
Feb 15 00:05:12.139: INFO: (19) /api/v1/namespaces/proxy-2186/pods/proxy-service-wmxfd-ct8gd:1080/proxy/: test<... (200; 13.743771ms)
Feb 15 00:05:12.139: INFO: (19) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:460/proxy/: tls baz (200; 13.947687ms)
Feb 15 00:05:12.140: INFO: (19) /api/v1/namespaces/proxy-2186/pods/https:proxy-service-wmxfd-ct8gd:443/proxy/:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-fdmk
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 00:05:17.451: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fdmk" in namespace "subpath-4926" to be "success or failure"
Feb 15 00:05:17.464: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.498607ms
Feb 15 00:05:19.472: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021342335s
Feb 15 00:05:21.480: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028706833s
Feb 15 00:05:23.488: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036439849s
Feb 15 00:05:25.496: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 8.044411022s
Feb 15 00:05:27.502: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 10.050767296s
Feb 15 00:05:29.509: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 12.057901029s
Feb 15 00:05:31.515: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 14.063860293s
Feb 15 00:05:33.524: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 16.073202982s
Feb 15 00:05:35.558: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 18.10728332s
Feb 15 00:05:37.569: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 20.118252811s
Feb 15 00:05:39.575: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 22.124077222s
Feb 15 00:05:41.653: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 24.202267516s
Feb 15 00:05:43.728: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Running", Reason="", readiness=true. Elapsed: 26.276702743s
Feb 15 00:05:46.680: INFO: Pod "pod-subpath-test-downwardapi-fdmk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.228395801s
STEP: Saw pod success
Feb 15 00:05:46.680: INFO: Pod "pod-subpath-test-downwardapi-fdmk" satisfied condition "success or failure"
Feb 15 00:05:46.739: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-fdmk container test-container-subpath-downwardapi-fdmk: 
STEP: delete the pod
Feb 15 00:05:46.892: INFO: Waiting for pod pod-subpath-test-downwardapi-fdmk to disappear
Feb 15 00:05:46.904: INFO: Pod pod-subpath-test-downwardapi-fdmk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fdmk
Feb 15 00:05:46.904: INFO: Deleting pod "pod-subpath-test-downwardapi-fdmk" in namespace "subpath-4926"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:05:46.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4926" for this suite.

• [SLOW TEST:29.670 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":48,"skipped":744,"failed":0}
SSSSSSSSSS
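The long runs of `(N) /api/v1/namespaces/proxy-2186/... (200; ...ms)` GETs near the top of this log all exercise the apiserver proxy subresource, whose paths follow the scheme `/api/v1/namespaces/{ns}/{pods|services}/[{scheme}:]{name}[:{port}]/proxy/{path}`. A minimal sketch of how those paths are assembled (the helper name is hypothetical, not part of the e2e framework):

```python
def proxy_path(namespace, kind, name, scheme=None, port=None, path=""):
    """Build an apiserver proxy path like the ones logged above.

    kind is "pods" or "services"; scheme ("http"/"https") and port
    (a number or a named port) are optional, matching the
    [scheme:]name[:port] forms seen in the log.
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/{path}"

# Reproduce two of the URLs exercised in the log above:
print(proxy_path("proxy-2186", "pods", "proxy-service-wmxfd-ct8gd",
                 scheme="https", port=443))
print(proxy_path("proxy-2186", "services", "proxy-service-wmxfd",
                 port="portname1"))
```

The test drives each of these paths twenty times (iterations `(0)`–`(19)`) and records per-request latency, which is the `(200; N ms)` suffix on every line.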
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:05:47.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 15 00:05:47.183: INFO: Waiting up to 5m0s for pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b" in namespace "emptydir-1441" to be "success or failure"
Feb 15 00:05:47.190: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.191965ms
Feb 15 00:05:49.197: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014289351s
Feb 15 00:05:51.203: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020524723s
Feb 15 00:05:53.209: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026630602s
Feb 15 00:05:56.106: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.923168417s
Feb 15 00:05:58.110: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.927220806s
STEP: Saw pod success
Feb 15 00:05:58.110: INFO: Pod "pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b" satisfied condition "success or failure"
Feb 15 00:05:58.113: INFO: Trying to get logs from node jerma-node pod pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b container test-container: 
STEP: delete the pod
Feb 15 00:05:58.297: INFO: Waiting for pod pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b to disappear
Feb 15 00:05:58.315: INFO: Pod pod-f61055dd-0c09-49ef-9fb6-6f25b7308c7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:05:58.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1441" for this suite.

• [SLOW TEST:11.400 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":49,"skipped":754,"failed":0}
SSSSSS
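The `Waiting up to 5m0s for pod ... to be "success or failure"` sequences above poll the pod's phase roughly every two seconds until it reaches a terminal phase or the deadline expires. A sketch of that loop, assuming an injected `get_phase` callable in place of a real pod GET (the function and parameter names are hypothetical, not the framework's Go code):

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase, mirroring the
    'Phase="Pending" -> "Running" -> "Succeeded"' progressions above.

    Returns the terminal phase, or raises TimeoutError after `timeout`
    seconds (the log's "Waiting up to 5m0s").
    """
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate the phase sequence seen in the log (interval=0 to avoid sleeping).
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_success_or_failure(lambda: next(phases), interval=0.0)
print(result)  # Succeeded
```

The "Saw pod success" STEP fires once the loop returns `Succeeded`, after which the test fetches the container's logs and deletes the pod.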
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:05:58.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 15 00:05:58.610: INFO: >>> kubeConfig: /root/.kube/config
Feb 15 00:06:02.318: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:06:15.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3667" for this suite.

• [SLOW TEST:17.178 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":50,"skipped":760,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:06:15.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Feb 15 00:06:15.635: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix069528174/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:06:15.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6164" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":51,"skipped":779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
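The `--unix-socket=/path` test above starts `kubectl proxy` serving the API over a Unix domain socket instead of a TCP port, then retrieves `/api/` through it. The mechanism can be demonstrated locally with a stand-in HTTP server over an AF_UNIX socket (POSIX-only; the server and its reply body are stand-ins, not the real apiserver):

```python
import http.client
import http.server
import os
import socket
import socketserver
import tempfile
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"paths": ["/api"]}'  # stand-in for the real /api/ reply
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        request, _ = self.socket.accept()
        # BaseHTTPRequestHandler expects a (host, port) client address.
        return request, ("localhost", 0)

sock_path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
server = UnixHTTPServer(sock_path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Connect an HTTP client over the Unix socket, as the test's
# "retrieving proxy /api/ output" step does against kubectl proxy.
conn = http.client.HTTPConnection("localhost")
conn.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
conn.sock.connect(sock_path)
conn.request("GET", "/api/")
resp = conn.getresponse()
print(resp.status)  # 200
server.shutdown()
```

With the real binary the equivalent is `kubectl proxy --unix-socket=/tmp/proxy.sock` followed by a request routed over that socket; the test passes as soon as `/api/` answers.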
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:06:15.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-2a17cce3-b0b9-4938-a4b8-48376bd81e41
STEP: Creating a pod to test consume secrets
Feb 15 00:06:15.865: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2" in namespace "projected-8728" to be "success or failure"
Feb 15 00:06:15.926: INFO: Pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2": Phase="Pending", Reason="", readiness=false. Elapsed: 60.778801ms
Feb 15 00:06:17.933: INFO: Pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068361386s
Feb 15 00:06:19.942: INFO: Pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076811868s
Feb 15 00:06:21.950: INFO: Pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084937113s
Feb 15 00:06:23.966: INFO: Pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100639272s
STEP: Saw pod success
Feb 15 00:06:23.966: INFO: Pod "pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2" satisfied condition "success or failure"
Feb 15 00:06:23.969: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2 container projected-secret-volume-test: 
STEP: delete the pod
Feb 15 00:06:24.272: INFO: Waiting for pod pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2 to disappear
Feb 15 00:06:24.275: INFO: Pod pod-projected-secrets-a4fb2c47-cdf8-4406-8be7-45ba416842f2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:06:24.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8728" for this suite.

• [SLOW TEST:8.493 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":808,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:06:24.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:06:24.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:06:30.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8133" for this suite.

• [SLOW TEST:6.434 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":821,"failed":0}
SSSSSSSSSSSSSSSSSSS
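The websocket test above dials the pod's `exec` subresource, `/api/v1/namespaces/{ns}/pods/{pod}/exec`, whose query string carries the command argv as repeated `command` keys plus the stream flags. A sketch of building that query (the pod name below is a placeholder; the log does not record it):

```python
from urllib.parse import urlencode

def exec_query(command, container=None, stdin=False, stdout=True,
               stderr=True, tty=False):
    """Query string for the pod exec subresource. Repeated 'command'
    keys carry the argv; the boolean flags select which streams the
    server multiplexes back over the websocket."""
    params = [("command", arg) for arg in command]
    if container:
        params.append(("container", container))
    for name, flag in (("stdin", stdin), ("stdout", stdout),
                       ("stderr", stderr), ("tty", tty)):
        params.append((name, "true" if flag else "false"))
    return urlencode(params)

# Hypothetical pod name; the namespace pods-8133 is from the log above.
path = ("/api/v1/namespaces/pods-8133/pods/example-pod/exec?"
        + exec_query(["/bin/sh", "-c", "echo hello"]))
print(path)
```

The e2e framework opens this URL as a websocket (with the usual bearer-token or client-cert credentials) and reads the command's stdout frames back.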
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:06:30.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-3131
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 15 00:06:30.853: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 15 00:06:30.899: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:06:32.909: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:06:34.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:06:37.530: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:06:38.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:40.906: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:42.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:44.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:46.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:48.907: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:53.310: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 00:06:54.910: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 15 00:06:54.940: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 15 00:07:03.046: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3131 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 00:07:03.046: INFO: >>> kubeConfig: /root/.kube/config
I0215 00:07:03.100159      10 log.go:172] (0xc001dd0630) (0xc00241c3c0) Create stream
I0215 00:07:03.100339      10 log.go:172] (0xc001dd0630) (0xc00241c3c0) Stream added, broadcasting: 1
I0215 00:07:03.110296      10 log.go:172] (0xc001dd0630) Reply frame received for 1
I0215 00:07:03.110403      10 log.go:172] (0xc001dd0630) (0xc00241c460) Create stream
I0215 00:07:03.110421      10 log.go:172] (0xc001dd0630) (0xc00241c460) Stream added, broadcasting: 3
I0215 00:07:03.112399      10 log.go:172] (0xc001dd0630) Reply frame received for 3
I0215 00:07:03.112525      10 log.go:172] (0xc001dd0630) (0xc00241c500) Create stream
I0215 00:07:03.112575      10 log.go:172] (0xc001dd0630) (0xc00241c500) Stream added, broadcasting: 5
I0215 00:07:03.114446      10 log.go:172] (0xc001dd0630) Reply frame received for 5
I0215 00:07:03.211273      10 log.go:172] (0xc001dd0630) Data frame received for 3
I0215 00:07:03.211404      10 log.go:172] (0xc00241c460) (3) Data frame handling
I0215 00:07:03.211454      10 log.go:172] (0xc00241c460) (3) Data frame sent
I0215 00:07:03.279647      10 log.go:172] (0xc001dd0630) Data frame received for 1
I0215 00:07:03.279753      10 log.go:172] (0xc001dd0630) (0xc00241c500) Stream removed, broadcasting: 5
I0215 00:07:03.279800      10 log.go:172] (0xc00241c3c0) (1) Data frame handling
I0215 00:07:03.279819      10 log.go:172] (0xc00241c3c0) (1) Data frame sent
I0215 00:07:03.279872      10 log.go:172] (0xc001dd0630) (0xc00241c460) Stream removed, broadcasting: 3
I0215 00:07:03.279891      10 log.go:172] (0xc001dd0630) (0xc00241c3c0) Stream removed, broadcasting: 1
I0215 00:07:03.279910      10 log.go:172] (0xc001dd0630) Go away received
I0215 00:07:03.280402      10 log.go:172] (0xc001dd0630) (0xc00241c3c0) Stream removed, broadcasting: 1
I0215 00:07:03.280418      10 log.go:172] (0xc001dd0630) (0xc00241c460) Stream removed, broadcasting: 3
I0215 00:07:03.280426      10 log.go:172] (0xc001dd0630) (0xc00241c500) Stream removed, broadcasting: 5
Feb 15 00:07:03.280: INFO: Found all expected endpoints: [netserver-0]
Feb 15 00:07:03.284: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3131 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 00:07:03.284: INFO: >>> kubeConfig: /root/.kube/config
I0215 00:07:03.320142      10 log.go:172] (0xc002b7f550) (0xc002d4ed20) Create stream
I0215 00:07:03.320247      10 log.go:172] (0xc002b7f550) (0xc002d4ed20) Stream added, broadcasting: 1
I0215 00:07:03.323593      10 log.go:172] (0xc002b7f550) Reply frame received for 1
I0215 00:07:03.323620      10 log.go:172] (0xc002b7f550) (0xc002cabc20) Create stream
I0215 00:07:03.323626      10 log.go:172] (0xc002b7f550) (0xc002cabc20) Stream added, broadcasting: 3
I0215 00:07:03.324557      10 log.go:172] (0xc002b7f550) Reply frame received for 3
I0215 00:07:03.324574      10 log.go:172] (0xc002b7f550) (0xc002d4edc0) Create stream
I0215 00:07:03.324580      10 log.go:172] (0xc002b7f550) (0xc002d4edc0) Stream added, broadcasting: 5
I0215 00:07:03.325796      10 log.go:172] (0xc002b7f550) Reply frame received for 5
I0215 00:07:03.397847      10 log.go:172] (0xc002b7f550) Data frame received for 3
I0215 00:07:03.397931      10 log.go:172] (0xc002cabc20) (3) Data frame handling
I0215 00:07:03.397952      10 log.go:172] (0xc002cabc20) (3) Data frame sent
I0215 00:07:03.461525      10 log.go:172] (0xc002b7f550) Data frame received for 1
I0215 00:07:03.461619      10 log.go:172] (0xc002d4ed20) (1) Data frame handling
I0215 00:07:03.461662      10 log.go:172] (0xc002d4ed20) (1) Data frame sent
I0215 00:07:03.461915      10 log.go:172] (0xc002b7f550) (0xc002d4ed20) Stream removed, broadcasting: 1
I0215 00:07:03.462258      10 log.go:172] (0xc002b7f550) (0xc002d4edc0) Stream removed, broadcasting: 5
I0215 00:07:03.462307      10 log.go:172] (0xc002b7f550) (0xc002cabc20) Stream removed, broadcasting: 3
I0215 00:07:03.462377      10 log.go:172] (0xc002b7f550) (0xc002d4ed20) Stream removed, broadcasting: 1
I0215 00:07:03.462386      10 log.go:172] (0xc002b7f550) (0xc002cabc20) Stream removed, broadcasting: 3
I0215 00:07:03.462393      10 log.go:172] (0xc002b7f550) (0xc002d4edc0) Stream removed, broadcasting: 5
Feb 15 00:07:03.462: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:07:03.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 00:07:03.463559      10 log.go:172] (0xc002b7f550) Go away received
STEP: Destroying namespace "pod-network-test-3131" for this suite.

• [SLOW TEST:32.753 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":54,"skipped":840,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
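The connectivity probe above execs a curl pipeline inside the host-test-container-pod; the trailing grep drops blank lines so only the reported hostname survives. A minimal local sketch of that filter (the pod IP and port from this run are assumed unreachable here, so a printf stands in for the curl response; the log's `^\s*$` is a GNU grep extension, and the equivalent POSIX character class is used instead):

```shell
# Stand-in for the curl response body: the hostname plus a trailing blank line.
# grep -v '^[[:space:]]*$' strips the blank line, leaving only the hostname,
# which the framework then matches against its expected endpoint list.
printf 'netserver-0\n\n' | grep -v '^[[:space:]]*$'
```

This is why the framework can log `Found all expected endpoints: [netserver-0]` directly from the captured stdout of the exec stream.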
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:07:03.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-ca4b6523-8eb9-4958-ae5d-5bc977042d78
STEP: Creating secret with name s-test-opt-upd-87e54f19-15ce-4d62-a430-12f004194027
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ca4b6523-8eb9-4958-ae5d-5bc977042d78
STEP: Updating secret s-test-opt-upd-87e54f19-15ce-4d62-a430-12f004194027
STEP: Creating secret with name s-test-opt-create-9b0178c0-aec8-4bc1-bd69-30c8fc8ab83f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:07:23.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2611" for this suite.

• [SLOW TEST:20.441 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":887,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:07:23.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:07:24.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9968" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":893,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:07:24.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:07:24.924: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:07:26.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322045, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:07:29.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322045, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:07:30.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322045, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:07:33.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322045, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322044, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:07:36.039: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Feb 15 00:07:44.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2834 to-be-attached-pod -i -c=container1'
Feb 15 00:07:44.330: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:07:44.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2834" for this suite.
STEP: Destroying namespace "webhook-2834-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:20.454 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":57,"skipped":905,"failed":0}
SSSSSSSSSSSSSSSSSSSS
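The webhook test treats a non-zero exit code from `kubectl attach` as proof that the admission webhook denied the request — the `rc: 1` line above. A hedged sketch of that exit-code capture, with `false` standing in for the rejected kubectl invocation (no cluster is assumed here):

```shell
# 'false' stands in for the denied 'kubectl attach' call, which exits non-zero.
# The '|| rc=$?' form records the failure status without aborting the script,
# reproducing the "rc: 1" line the framework logs.
rc=0
false || rc=$?
echo "rc: $rc"
```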
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:07:44.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:07:44.767: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:07:52.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6474" for this suite.

• [SLOW TEST:8.433 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":280,"completed":58,"skipped":925,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:07:53.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:07:53.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc" in namespace "projected-8530" to be "success or failure"
Feb 15 00:07:53.496: INFO: Pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.498633ms
Feb 15 00:07:55.502: INFO: Pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019178771s
Feb 15 00:07:57.512: INFO: Pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028992075s
Feb 15 00:07:59.519: INFO: Pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036264336s
Feb 15 00:08:01.557: INFO: Pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074350475s
STEP: Saw pod success
Feb 15 00:08:01.557: INFO: Pod "downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc" satisfied condition "success or failure"
Feb 15 00:08:01.562: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc container client-container: 
STEP: delete the pod
Feb 15 00:08:01.600: INFO: Waiting for pod downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc to disappear
Feb 15 00:08:01.608: INFO: Pod downwardapi-volume-261505c7-45f3-46a4-b47b-5058768c28cc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:01.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8530" for this suite.

• [SLOW TEST:8.616 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":59,"skipped":947,"failed":0}
SSSSSSSSSS
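The "should set mode on item file" test mounts a projected downward API item with an explicit file mode and has the client container verify it. A local sketch of that verification, assuming a mode of 0400 for illustration (the actual mode used by this run is not shown in the log; `stat -c` is the GNU/Linux form):

```shell
# Create a file, apply the item mode, and read it back the way the test
# container would. 0400 is an assumed example mode, not taken from this log.
f=$(mktemp)
chmod 0400 "$f"
stat -c '%a' "$f"   # prints the octal mode of the file
rm -f "$f"
```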
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:01.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Feb 15 00:08:01.806: INFO: Waiting up to 5m0s for pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686" in namespace "containers-5849" to be "success or failure"
Feb 15 00:08:01.876: INFO: Pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686": Phase="Pending", Reason="", readiness=false. Elapsed: 69.578077ms
Feb 15 00:08:03.886: INFO: Pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079873563s
Feb 15 00:08:05.894: INFO: Pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087146739s
Feb 15 00:08:07.900: INFO: Pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093738649s
Feb 15 00:08:09.906: INFO: Pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099237885s
STEP: Saw pod success
Feb 15 00:08:09.906: INFO: Pod "client-containers-6b43869a-3bbb-4500-95b6-323494b73686" satisfied condition "success or failure"
Feb 15 00:08:09.910: INFO: Trying to get logs from node jerma-node pod client-containers-6b43869a-3bbb-4500-95b6-323494b73686 container test-container: 
STEP: delete the pod
Feb 15 00:08:09.960: INFO: Waiting for pod client-containers-6b43869a-3bbb-4500-95b6-323494b73686 to disappear
Feb 15 00:08:09.986: INFO: Pod client-containers-6b43869a-3bbb-4500-95b6-323494b73686 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:09.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5849" for this suite.

• [SLOW TEST:8.372 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":60,"skipped":957,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:09.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:08:10.190: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:10.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9049" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":280,"completed":61,"skipped":964,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:10.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 15 00:08:21.152: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2619 PodName:pod-sharedvolume-627fd854-9907-48c9-b1b7-4af2922d387e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 00:08:21.152: INFO: >>> kubeConfig: /root/.kube/config
I0215 00:08:21.208902      10 log.go:172] (0xc002e18000) (0xc002caa640) Create stream
I0215 00:08:21.209174      10 log.go:172] (0xc002e18000) (0xc002caa640) Stream added, broadcasting: 1
I0215 00:08:21.215507      10 log.go:172] (0xc002e18000) Reply frame received for 1
I0215 00:08:21.215550      10 log.go:172] (0xc002e18000) (0xc001e608c0) Create stream
I0215 00:08:21.215561      10 log.go:172] (0xc002e18000) (0xc001e608c0) Stream added, broadcasting: 3
I0215 00:08:21.217429      10 log.go:172] (0xc002e18000) Reply frame received for 3
I0215 00:08:21.217453      10 log.go:172] (0xc002e18000) (0xc002a68b40) Create stream
I0215 00:08:21.217466      10 log.go:172] (0xc002e18000) (0xc002a68b40) Stream added, broadcasting: 5
I0215 00:08:21.220749      10 log.go:172] (0xc002e18000) Reply frame received for 5
I0215 00:08:21.331434      10 log.go:172] (0xc002e18000) Data frame received for 3
I0215 00:08:21.331500      10 log.go:172] (0xc001e608c0) (3) Data frame handling
I0215 00:08:21.331539      10 log.go:172] (0xc001e608c0) (3) Data frame sent
I0215 00:08:21.408462      10 log.go:172] (0xc002e18000) Data frame received for 1
I0215 00:08:21.408605      10 log.go:172] (0xc002e18000) (0xc002a68b40) Stream removed, broadcasting: 5
I0215 00:08:21.408708      10 log.go:172] (0xc002caa640) (1) Data frame handling
I0215 00:08:21.408758      10 log.go:172] (0xc002caa640) (1) Data frame sent
I0215 00:08:21.408804      10 log.go:172] (0xc002e18000) (0xc001e608c0) Stream removed, broadcasting: 3
I0215 00:08:21.408840      10 log.go:172] (0xc002e18000) (0xc002caa640) Stream removed, broadcasting: 1
I0215 00:08:21.408876      10 log.go:172] (0xc002e18000) Go away received
I0215 00:08:21.409240      10 log.go:172] (0xc002e18000) (0xc002caa640) Stream removed, broadcasting: 1
I0215 00:08:21.409260      10 log.go:172] (0xc002e18000) (0xc001e608c0) Stream removed, broadcasting: 3
I0215 00:08:21.409277      10 log.go:172] (0xc002e18000) (0xc002a68b40) Stream removed, broadcasting: 5
Feb 15 00:08:21.409: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:21.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2619" for this suite.

• [SLOW TEST:10.436 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":62,"skipped":966,"failed":0}
SSSSS
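The emptyDir test above has `busybox-sub-container` write `/usr/share/volumeshare/shareddata.txt` into the shared volume and `busybox-main-container` read it back via exec. A local stand-in, using a temp directory for the volume (the file's actual contents are not shown in this log, so the message below is an assumption):

```shell
# A temp dir stands in for the emptyDir volume: one step writes the file
# (the sub container in the test), the next reads it (the main container).
share=$(mktemp -d)
echo 'Hello from the second container' > "$share/shareddata.txt"
cat "$share/shareddata.txt"
rm -rf "$share"
```

The empty `Exec stderr: ""` in the log is the success signal: the read produced only the file contents on stdout.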
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:21.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 15 00:08:21.644: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 15 00:08:21.671: INFO: Waiting for terminating namespaces to be deleted...
Feb 15 00:08:21.673: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 15 00:08:21.681: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 15 00:08:21.682: INFO: 	Container weave ready: true, restart count 1
Feb 15 00:08:21.682: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 00:08:21.682: INFO: pod-sharedvolume-627fd854-9907-48c9-b1b7-4af2922d387e from emptydir-2619 started at 2020-02-15 00:08:11 +0000 UTC (2 container statuses recorded)
Feb 15 00:08:21.682: INFO: 	Container busybox-main-container ready: true, restart count 0
Feb 15 00:08:21.682: INFO: 	Container busybox-sub-container ready: false, restart count 0
Feb 15 00:08:21.682: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.682: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 00:08:21.682: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 15 00:08:21.700: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 15 00:08:21.701: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 00:08:21.701: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 15 00:08:21.701: INFO: 	Container weave ready: true, restart count 0
Feb 15 00:08:21.701: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 00:08:21.701: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container kube-scheduler ready: true, restart count 11
Feb 15 00:08:21.701: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 15 00:08:21.701: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container etcd ready: true, restart count 1
Feb 15 00:08:21.701: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container coredns ready: true, restart count 0
Feb 15 00:08:21.701: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 15 00:08:21.701: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f36ac8162b5b0d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f36ac81746f11f], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:22.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6856" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":280,"completed":63,"skipped":971,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:22.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:08:22.903: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.824565ms)
Feb 15 00:08:22.935: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 31.838787ms)
Feb 15 00:08:22.941: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.616548ms)
Feb 15 00:08:22.947: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.138586ms)
Feb 15 00:08:22.952: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.202723ms)
Feb 15 00:08:22.956: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.236777ms)
Feb 15 00:08:22.960: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.008666ms)
Feb 15 00:08:22.965: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.426534ms)
Feb 15 00:08:22.970: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.312509ms)
Feb 15 00:08:22.973: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.703419ms)
Feb 15 00:08:22.977: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.850483ms)
Feb 15 00:08:22.981: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.680461ms)
Feb 15 00:08:22.985: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.981964ms)
Feb 15 00:08:22.989: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.209958ms)
Feb 15 00:08:22.994: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.971127ms)
Feb 15 00:08:23.003: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.070218ms)
Feb 15 00:08:23.009: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.465674ms)
Feb 15 00:08:23.015: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.424649ms)
Feb 15 00:08:23.021: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.70191ms)
Feb 15 00:08:23.027: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.467291ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:23.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1621" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":64,"skipped":973,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:23.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:39.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1988" for this suite.

• [SLOW TEST:16.355 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":65,"skipped":973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:39.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 15 00:08:39.638: INFO: Waiting up to 5m0s for pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3" in namespace "emptydir-7919" to be "success or failure"
Feb 15 00:08:39.651: INFO: Pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.080569ms
Feb 15 00:08:41.662: INFO: Pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024358607s
Feb 15 00:08:43.672: INFO: Pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034516912s
Feb 15 00:08:45.679: INFO: Pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041585434s
Feb 15 00:08:47.688: INFO: Pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050228155s
STEP: Saw pod success
Feb 15 00:08:47.688: INFO: Pod "pod-7fe775e3-3bcd-495c-a6e1-258592b398d3" satisfied condition "success or failure"
Feb 15 00:08:47.692: INFO: Trying to get logs from node jerma-node pod pod-7fe775e3-3bcd-495c-a6e1-258592b398d3 container test-container: 
STEP: delete the pod
Feb 15 00:08:47.731: INFO: Waiting for pod pod-7fe775e3-3bcd-495c-a6e1-258592b398d3 to disappear
Feb 15 00:08:47.808: INFO: Pod pod-7fe775e3-3bcd-495c-a6e1-258592b398d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:47.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7919" for this suite.

• [SLOW TEST:8.423 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":66,"skipped":1018,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:47.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 15 00:08:47.936: INFO: Waiting up to 5m0s for pod "pod-66c876af-79fa-4b89-a5a5-15380750383f" in namespace "emptydir-9260" to be "success or failure"
Feb 15 00:08:47.975: INFO: Pod "pod-66c876af-79fa-4b89-a5a5-15380750383f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.376695ms
Feb 15 00:08:49.990: INFO: Pod "pod-66c876af-79fa-4b89-a5a5-15380750383f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053383953s
Feb 15 00:08:51.996: INFO: Pod "pod-66c876af-79fa-4b89-a5a5-15380750383f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059047583s
Feb 15 00:08:54.003: INFO: Pod "pod-66c876af-79fa-4b89-a5a5-15380750383f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066765305s
Feb 15 00:08:56.016: INFO: Pod "pod-66c876af-79fa-4b89-a5a5-15380750383f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079941692s
STEP: Saw pod success
Feb 15 00:08:56.017: INFO: Pod "pod-66c876af-79fa-4b89-a5a5-15380750383f" satisfied condition "success or failure"
Feb 15 00:08:56.023: INFO: Trying to get logs from node jerma-node pod pod-66c876af-79fa-4b89-a5a5-15380750383f container test-container: 
STEP: delete the pod
Feb 15 00:08:56.055: INFO: Waiting for pod pod-66c876af-79fa-4b89-a5a5-15380750383f to disappear
Feb 15 00:08:56.060: INFO: Pod pod-66c876af-79fa-4b89-a5a5-15380750383f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:08:56.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9260" for this suite.

• [SLOW TEST:8.249 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":67,"skipped":1036,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:08:56.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:08:56.694: INFO: Create a RollingUpdate DaemonSet
Feb 15 00:08:56.702: INFO: Check that daemon pods launch on every node of the cluster
Feb 15 00:08:56.784: INFO: Number of nodes with available pods: 0
Feb 15 00:08:56.784: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:08:59.033: INFO: Number of nodes with available pods: 0
Feb 15 00:08:59.033: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:08:59.799: INFO: Number of nodes with available pods: 0
Feb 15 00:08:59.799: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:09:00.800: INFO: Number of nodes with available pods: 0
Feb 15 00:09:00.800: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:09:02.519: INFO: Number of nodes with available pods: 0
Feb 15 00:09:02.519: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:09:03.601: INFO: Number of nodes with available pods: 0
Feb 15 00:09:03.601: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:09:04.436: INFO: Number of nodes with available pods: 0
Feb 15 00:09:04.437: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:09:05.186: INFO: Number of nodes with available pods: 0
Feb 15 00:09:05.186: INFO: Node jerma-node is running more than one daemon pod
Feb 15 00:09:05.801: INFO: Number of nodes with available pods: 1
Feb 15 00:09:05.801: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 15 00:09:06.910: INFO: Number of nodes with available pods: 2
Feb 15 00:09:06.911: INFO: Number of running nodes: 2, number of available pods: 2
Feb 15 00:09:06.911: INFO: Update the DaemonSet to trigger a rollout
Feb 15 00:09:06.939: INFO: Updating DaemonSet daemon-set
Feb 15 00:09:13.991: INFO: Roll back the DaemonSet before rollout is complete
Feb 15 00:09:14.000: INFO: Updating DaemonSet daemon-set
Feb 15 00:09:14.000: INFO: Make sure DaemonSet rollback is complete
Feb 15 00:09:14.253: INFO: Wrong image for pod: daemon-set-4t6bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 15 00:09:14.253: INFO: Pod daemon-set-4t6bq is not available
Feb 15 00:09:15.374: INFO: Wrong image for pod: daemon-set-4t6bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 15 00:09:15.375: INFO: Pod daemon-set-4t6bq is not available
Feb 15 00:09:16.279: INFO: Wrong image for pod: daemon-set-4t6bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 15 00:09:16.279: INFO: Pod daemon-set-4t6bq is not available
Feb 15 00:09:17.295: INFO: Wrong image for pod: daemon-set-4t6bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 15 00:09:17.295: INFO: Pod daemon-set-4t6bq is not available
Feb 15 00:09:18.779: INFO: Wrong image for pod: daemon-set-4t6bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 15 00:09:18.779: INFO: Pod daemon-set-4t6bq is not available
Feb 15 00:09:19.305: INFO: Wrong image for pod: daemon-set-4t6bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 15 00:09:19.305: INFO: Pod daemon-set-4t6bq is not available
Feb 15 00:09:20.269: INFO: Pod daemon-set-5b9s8 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2012, will wait for the garbage collector to delete the pods
Feb 15 00:09:20.357: INFO: Deleting DaemonSet.extensions daemon-set took: 14.258269ms
Feb 15 00:09:21.258: INFO: Terminating DaemonSet.extensions daemon-set pods took: 901.150411ms
Feb 15 00:09:26.493: INFO: Number of nodes with available pods: 0
Feb 15 00:09:26.494: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 00:09:26.499: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2012/daemonsets","resourceVersion":"8478090"},"items":null}

Feb 15 00:09:26.502: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2012/pods","resourceVersion":"8478090"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:09:26.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2012" for this suite.

• [SLOW TEST:30.457 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":68,"skipped":1044,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:09:26.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 15 00:09:26.678: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 15 00:09:26.725: INFO: Waiting for terminating namespaces to be deleted...
Feb 15 00:09:26.743: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 15 00:09:26.803: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.803: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 00:09:26.803: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 15 00:09:26.803: INFO: 	Container weave ready: true, restart count 1
Feb 15 00:09:26.803: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 00:09:26.803: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 15 00:09:26.815: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container coredns ready: true, restart count 0
Feb 15 00:09:26.815: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container coredns ready: true, restart count 0
Feb 15 00:09:26.815: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 00:09:26.815: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 15 00:09:26.815: INFO: 	Container weave ready: true, restart count 0
Feb 15 00:09:26.815: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 00:09:26.815: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 15 00:09:26.815: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container kube-scheduler ready: true, restart count 11
Feb 15 00:09:26.815: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container etcd ready: true, restart count 1
Feb 15 00:09:26.815: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 15 00:09:26.815: INFO: 	Container kube-apiserver ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 15 00:09:27.047: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 15 00:09:27.047: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 15 00:09:27.047: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Feb 15 00:09:27.047: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Feb 15 00:09:27.059: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc.15f36ad74c4e97c7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6104/filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc.15f36ad87c75b567], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc.15f36ad922c602a0], Reason = [Created], Message = [Created container filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc.15f36ad93fa69d08], Reason = [Started], Message = [Started container filler-pod-815695fe-508e-4e1e-85f5-24d506e692cc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc.15f36ad753fa024d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6104/filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc.15f36ad87d69df6d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc.15f36ad9552a25ff], Reason = [Created], Message = [Created container filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc.15f36ad97b744a9f], Reason = [Started], Message = [Started container filler-pod-e9bb7fe5-4c7d-455c-9f72-af27a473e3bc]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f36ada2270ebc7], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f36ada2609a8f1], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:09:40.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6104" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:13.955 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":280,"completed":69,"skipped":1045,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:09:40.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:09:41.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:09:43.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:09:45.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:09:47.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:09:49.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:09:51.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:09:53.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322181, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:09:56.320: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:09:57.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5263" for this suite.
STEP: Destroying namespace "webhook-5263-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:16.712 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":70,"skipped":1052,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:09:57.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:09:57.269: INFO: Creating deployment "webserver-deployment"
Feb 15 00:09:57.324: INFO: Waiting for observed generation 1
Feb 15 00:10:00.671: INFO: Waiting for all required pods to come up
Feb 15 00:10:01.824: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 15 00:10:28.058: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 15 00:10:28.067: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 15 00:10:28.077: INFO: Updating deployment webserver-deployment
Feb 15 00:10:28.077: INFO: Waiting for observed generation 2
Feb 15 00:10:31.789: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 15 00:10:31.843: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 15 00:10:31.863: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 15 00:10:32.050: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 15 00:10:32.050: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 15 00:10:32.053: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 15 00:10:32.057: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 15 00:10:32.057: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 15 00:10:32.070: INFO: Updating deployment webserver-deployment
Feb 15 00:10:32.071: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 15 00:10:33.493: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 15 00:10:36.665: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 15 00:10:38.393: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-9209 /apis/apps/v1/namespaces/deployment-9209/deployments/webserver-deployment b0c2d4be-31ab-4540-8007-050f84387eda 8478620 3 2020-02-15 00:09:57 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0059c3dc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-15 00:10:33 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-15 00:10:35 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb 15 00:10:39.370: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-9209 /apis/apps/v1/namespaces/deployment-9209/replicasets/webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 8478618 3 2020-02-15 00:10:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b0c2d4be-31ab-4540-8007-050f84387eda 0xc0053b4307 0xc0053b4308}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053b4378  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 15 00:10:39.370: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb 15 00:10:39.370: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-9209 /apis/apps/v1/namespaces/deployment-9209/replicasets/webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 8478601 3 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b0c2d4be-31ab-4540-8007-050f84387eda 0xc0053b4247 0xc0053b4248}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053b42a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb 15 00:10:40.921: INFO: Pod "webserver-deployment-595b5b9587-2cls5" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2cls5 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-2cls5 ad18e16b-d2e3-4f38-ae2f-2d9b94798fdd 8478483 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b4847 0xc0053b4848}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-15 00:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e34e5ac9df8e4b558cdcdc6a31b33a35d232ca5e727ce11d09d88fe3958dc2b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.922: INFO: Pod "webserver-deployment-595b5b9587-2n26l" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2n26l webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-2n26l 52186652-9310-4502-8ce4-efa36bbc6760 8478444 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b49b0 0xc0053b49b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-02-15 00:10:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://df9c67f7ad1dcea3f7df46255f2ddf21270659d611b6a8d742a3268a50f6fe54,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.922: INFO: Pod "webserver-deployment-595b5b9587-2xg5k" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2xg5k webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-2xg5k f633090e-dc92-488a-bbb3-611c58e589c1 8478462 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b4b20 0xc0053b4b21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-15 00:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b6300036aead836dd930ddb28565005bde572076a3e1a24c1e24993fa8c465e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.922: INFO: Pod "webserver-deployment-595b5b9587-5jfcb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5jfcb webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-5jfcb d97d6962-f9f8-4ce1-87ad-9da9c7306a2a 8478596 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b4c90 0xc0053b4c91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.922: INFO: Pod "webserver-deployment-595b5b9587-68sbr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-68sbr webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-68sbr 4b4ca30f-bdb9-444a-a4f3-3ee4a36c7d0d 8478480 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b4da7 0xc0053b4da8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-15 00:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c7405a97f36135ebdc48d255c8e3a30b735eaa67bb646633b4e403edb96b70e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.923: INFO: Pod "webserver-deployment-595b5b9587-6jml5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6jml5 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-6jml5 55f6aee8-6b82-47e5-ae14-95154893ef88 8478631 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b4f20 0xc0053b4f21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-15 00:10:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.923: INFO: Pod "webserver-deployment-595b5b9587-b9dxb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b9dxb webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-b9dxb ff873892-1bbd-4f05-9464-6deb4a92a102 8478595 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5087 0xc0053b5088}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.923: INFO: Pod "webserver-deployment-595b5b9587-cvh2q" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cvh2q webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-cvh2q 6ec002c7-3619-4f33-b9e5-f689b31501b2 8478466 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5197 0xc0053b5198}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-15 00:09:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5ebf0cdb15037d8e424f5be57228f666bd91cce087785759025ec7b1f38ce119,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.924: INFO: Pod "webserver-deployment-595b5b9587-cznc6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cznc6 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-cznc6 c5472dbf-202d-4dd0-806b-055b59473bd4 8478622 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5330 0xc0053b5331}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-15 00:10:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.924: INFO: Pod "webserver-deployment-595b5b9587-gll8k" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gll8k webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-gll8k 1177639d-d589-4b38-8e27-95fe506163f4 8478590 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5487 0xc0053b5488}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.924: INFO: Pod "webserver-deployment-595b5b9587-hz6fv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hz6fv webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-hz6fv 1da15a27-95c7-4634-9606-fce3107234ea 8478626 0 2020-02-15 00:10:32 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5597 0xc0053b5598}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-15 00:10:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.925: INFO: Pod "webserver-deployment-595b5b9587-p9fm2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-p9fm2 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-p9fm2 baa67dc1-6aba-4ffd-a671-f54d8b3a186b 8478599 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b56f7 0xc0053b56f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.925: INFO: Pod "webserver-deployment-595b5b9587-qq9qr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qq9qr webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-qq9qr 8db0ae8c-163b-497a-8c8c-ff413edb97be 8478629 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5807 0xc0053b5808}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-15 00:10:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.925: INFO: Pod "webserver-deployment-595b5b9587-rhllb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rhllb webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-rhllb c753e453-35fa-4a85-96dc-df6be2e3c9ea 8478459 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5957 0xc0053b5958}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-15 00:10:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://07a83f3303bffe482c7ca80fadffa9cce3128e0bfa1158165befcb48826c48fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.925: INFO: Pod "webserver-deployment-595b5b9587-sjt44" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sjt44 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-sjt44 614834b2-f32b-47cb-a171-cd5ab0819a96 8478597 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5ad0 0xc0053b5ad1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.926: INFO: Pod "webserver-deployment-595b5b9587-sxp62" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sxp62 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-sxp62 4323beab-0bbe-4209-bde8-4ff326832916 8478588 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5be7 0xc0053b5be8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.926: INFO: Pod "webserver-deployment-595b5b9587-t9lr2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t9lr2 webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-t9lr2 39fa946b-dd35-4e24-98fd-49b9c4f7c0d2 8478598 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5d07 0xc0053b5d08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.926: INFO: Pod "webserver-deployment-595b5b9587-v4v4t" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-v4v4t webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-v4v4t 5a038bcf-ee8d-4caa-a1c0-357c1d83e529 8478477 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5e17 0xc0053b5e18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-15 00:09:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://25bd1f886ac294d1b16ceb2f14878d299c34f2bf41cad0e8c8873ac4fd918558,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.926: INFO: Pod "webserver-deployment-595b5b9587-w2qfw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w2qfw webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-w2qfw 9b899cf1-d683-473a-a44e-20566085f4eb 8478568 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc0053b5f80 0xc0053b5f81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.927: INFO: Pod "webserver-deployment-595b5b9587-zsx6b" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zsx6b webserver-deployment-595b5b9587- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-595b5b9587-zsx6b 639f069a-6f4f-4436-ab65-1e3a28dcc4db 8478452 0 2020-02-15 00:09:57 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 084bbf2e-2806-4ada-afdf-4df8ac817a67 0xc00535a097 0xc00535a098}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-15 00:09:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:10:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0c7c450d31e353a9bfc0e1cf03638ee5d8e6ddd4b9440579289786693e9dad75,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.927: INFO: Pod "webserver-deployment-c7997dcc8-2kzbl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2kzbl webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-2kzbl e0059a88-f1dd-4b41-bc91-63dc1ac9a39c 8478544 0 2020-02-15 00:10:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535a200 0xc00535a201}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-15 00:10:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.927: INFO: Pod "webserver-deployment-c7997dcc8-2nf95" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2nf95 webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-2nf95 aba7c026-080a-4df0-b3e7-6256ce43a0d5 8478594 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535a377 0xc00535a378}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.927: INFO: Pod "webserver-deployment-c7997dcc8-495ph" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-495ph webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-495ph fec272ea-8738-4bdd-b5db-7742d3e964d1 8478514 0 2020-02-15 00:10:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535a4a7 0xc00535a4a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-15 00:10:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.928: INFO: Pod "webserver-deployment-c7997dcc8-49fn7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-49fn7 webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-49fn7 50cb144c-32b8-4d70-b56a-ad5a9f35a0c0 8478591 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535a637 0xc00535a638}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.928: INFO: Pod "webserver-deployment-c7997dcc8-56jkc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-56jkc webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-56jkc a5b2cbe3-1677-45bf-9120-171b1b9d4936 8478592 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535a767 0xc00535a768}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.928: INFO: Pod "webserver-deployment-c7997dcc8-8mrn5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8mrn5 webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-8mrn5 866c18f1-5697-412a-bdac-d704a2bb9cd0 8478539 0 2020-02-15 00:10:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535a897 0xc00535a898}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-15 00:10:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.928: INFO: Pod "webserver-deployment-c7997dcc8-d77lv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d77lv webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-d77lv e58683f4-fbf2-4528-8efb-3caedf97ec19 8478519 0 2020-02-15 00:10:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535aa07 0xc00535aa08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-15 00:10:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.929: INFO: Pod "webserver-deployment-c7997dcc8-dd9dj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dd9dj webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-dd9dj 5b241d60-e36c-4d7f-8613-d1db56a2fd40 8478616 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535ab77 0xc00535ab78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-15 00:10:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.929: INFO: Pod "webserver-deployment-c7997dcc8-kgdjm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kgdjm webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-kgdjm 7daf4626-40b1-48df-9a47-cd6c5e0b7288 8478536 0 2020-02-15 00:10:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535ace7 0xc00535ace8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-15 00:10:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.929: INFO: Pod "webserver-deployment-c7997dcc8-l7px7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l7px7 webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-l7px7 5dcf9751-caae-4422-8965-532df34ea46a 8478615 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535ae67 0xc00535ae68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.929: INFO: Pod "webserver-deployment-c7997dcc8-s4xfq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s4xfq webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-s4xfq e65017a1-ca60-4cff-b7d6-724b3fc19400 8478593 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535af87 0xc00535af88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.930: INFO: Pod "webserver-deployment-c7997dcc8-ts7tp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ts7tp webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-ts7tp f8583d90-4025-4e36-96cb-da31008bd031 8478634 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535b0b7 0xc00535b0b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-15 00:10:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 00:10:40.930: INFO: Pod "webserver-deployment-c7997dcc8-w874f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w874f webserver-deployment-c7997dcc8- deployment-9209 /api/v1/namespaces/deployment-9209/pods/webserver-deployment-c7997dcc8-w874f fcc95b9f-1cb1-4ba0-8ebe-e6e9316a5106 8478587 0 2020-02-15 00:10:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d5d24b00-4408-4598-a2c8-f2d75abb348d 0xc00535b237 0xc00535b238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrbxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrbxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrbxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:10:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:10:40.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9209" for this suite.

• [SLOW TEST:46.223 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":71,"skipped":1056,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:10:43.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-30e7828f-5bc3-4c1f-bc12-869ec4c99549
STEP: Creating a pod to test consume configMaps
Feb 15 00:10:49.614: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c" in namespace "projected-6152" to be "success or failure"
Feb 15 00:10:49.759: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 145.133309ms
Feb 15 00:10:53.429: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814646022s
Feb 15 00:10:57.131: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.516817226s
Feb 15 00:11:03.108: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.493725705s
Feb 15 00:11:05.870: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.255919363s
Feb 15 00:11:08.724: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.109874941s
Feb 15 00:11:12.350: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.7362828s
Feb 15 00:11:15.743: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.128972306s
Feb 15 00:11:19.441: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.8266833s
Feb 15 00:11:23.180: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.566263665s
Feb 15 00:11:25.262: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.647721558s
Feb 15 00:11:27.491: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.877038377s
Feb 15 00:11:29.779: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.165114435s
Feb 15 00:11:31.816: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.202478728s
Feb 15 00:11:34.021: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.406870175s
Feb 15 00:11:37.813: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.199057951s
Feb 15 00:11:40.036: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.421713851s
Feb 15 00:11:42.994: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.379587555s
Feb 15 00:11:45.279: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.66497114s
Feb 15 00:11:47.472: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.857650698s
Feb 15 00:11:49.703: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.089095279s
Feb 15 00:11:51.709: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.095285183s
Feb 15 00:11:53.717: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m4.103379067s
STEP: Saw pod success
Feb 15 00:11:53.717: INFO: Pod "pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c" satisfied condition "success or failure"
Feb 15 00:11:53.722: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 00:11:53.775: INFO: Waiting for pod pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c to disappear
Feb 15 00:11:53.785: INFO: Pod pod-projected-configmaps-048ad18a-9875-4350-ac71-d192bbd7b12c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:11:53.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6152" for this suite.

• [SLOW TEST:70.573 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":1070,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:11:53.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3862
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3862
STEP: creating replication controller externalsvc in namespace services-3862
I0215 00:11:54.419375      10 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3862, replica count: 2
I0215 00:11:57.471155      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:12:00.472295      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:12:03.473562      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:12:06.474126      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:12:09.474693      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 15 00:12:09.532: INFO: Creating new exec pod
Feb 15 00:12:15.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3862 execpodn86k9 -- /bin/sh -x -c nslookup clusterip-service'
Feb 15 00:12:18.100: INFO: stderr: "I0215 00:12:17.856461    1139 log.go:172] (0xc000908e70) (0xc000c6e460) Create stream\nI0215 00:12:17.856632    1139 log.go:172] (0xc000908e70) (0xc000c6e460) Stream added, broadcasting: 1\nI0215 00:12:17.863186    1139 log.go:172] (0xc000908e70) Reply frame received for 1\nI0215 00:12:17.863265    1139 log.go:172] (0xc000908e70) (0xc000c6e500) Create stream\nI0215 00:12:17.863294    1139 log.go:172] (0xc000908e70) (0xc000c6e500) Stream added, broadcasting: 3\nI0215 00:12:17.865649    1139 log.go:172] (0xc000908e70) Reply frame received for 3\nI0215 00:12:17.865681    1139 log.go:172] (0xc000908e70) (0xc000c6e5a0) Create stream\nI0215 00:12:17.865693    1139 log.go:172] (0xc000908e70) (0xc000c6e5a0) Stream added, broadcasting: 5\nI0215 00:12:17.867947    1139 log.go:172] (0xc000908e70) Reply frame received for 5\nI0215 00:12:17.988872    1139 log.go:172] (0xc000908e70) Data frame received for 5\nI0215 00:12:17.988904    1139 log.go:172] (0xc000c6e5a0) (5) Data frame handling\nI0215 00:12:17.988929    1139 log.go:172] (0xc000c6e5a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0215 00:12:18.017222    1139 log.go:172] (0xc000908e70) Data frame received for 3\nI0215 00:12:18.017258    1139 log.go:172] (0xc000c6e500) (3) Data frame handling\nI0215 00:12:18.017288    1139 log.go:172] (0xc000c6e500) (3) Data frame sent\nI0215 00:12:18.018231    1139 log.go:172] (0xc000908e70) Data frame received for 3\nI0215 00:12:18.018239    1139 log.go:172] (0xc000c6e500) (3) Data frame handling\nI0215 00:12:18.018251    1139 log.go:172] (0xc000c6e500) (3) Data frame sent\nI0215 00:12:18.091306    1139 log.go:172] (0xc000908e70) (0xc000c6e500) Stream removed, broadcasting: 3\nI0215 00:12:18.091514    1139 log.go:172] (0xc000908e70) Data frame received for 1\nI0215 00:12:18.091576    1139 log.go:172] (0xc000c6e460) (1) Data frame handling\nI0215 00:12:18.091639    1139 log.go:172] (0xc000c6e460) (1) Data frame sent\nI0215 00:12:18.091674    1139 log.go:172] (0xc000908e70) (0xc000c6e460) Stream removed, broadcasting: 1\nI0215 00:12:18.091754    1139 log.go:172] (0xc000908e70) (0xc000c6e5a0) Stream removed, broadcasting: 5\nI0215 00:12:18.091817    1139 log.go:172] (0xc000908e70) Go away received\nI0215 00:12:18.092549    1139 log.go:172] (0xc000908e70) (0xc000c6e460) Stream removed, broadcasting: 1\nI0215 00:12:18.092569    1139 log.go:172] (0xc000908e70) (0xc000c6e500) Stream removed, broadcasting: 3\nI0215 00:12:18.092577    1139 log.go:172] (0xc000908e70) (0xc000c6e5a0) Stream removed, broadcasting: 5\n"
Feb 15 00:12:18.100: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3862.svc.cluster.local\tcanonical name = externalsvc.services-3862.svc.cluster.local.\nName:\texternalsvc.services-3862.svc.cluster.local\nAddress: 10.96.41.225\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3862, will wait for the garbage collector to delete the pods
Feb 15 00:12:18.167: INFO: Deleting ReplicationController externalsvc took: 12.153898ms
Feb 15 00:12:18.569: INFO: Terminating ReplicationController externalsvc pods took: 401.354192ms
Feb 15 00:12:33.201: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:12:33.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3862" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:39.283 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":73,"skipped":1087,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:12:33.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 15 00:12:34.617: INFO: Pod name wrapped-volume-race-930927d6-8c6e-4d27-8b31-00165e97537f: Found 0 pods out of 5
Feb 15 00:12:39.627: INFO: Pod name wrapped-volume-race-930927d6-8c6e-4d27-8b31-00165e97537f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-930927d6-8c6e-4d27-8b31-00165e97537f in namespace emptydir-wrapper-4856, will wait for the garbage collector to delete the pods
Feb 15 00:13:07.732: INFO: Deleting ReplicationController wrapped-volume-race-930927d6-8c6e-4d27-8b31-00165e97537f took: 10.495352ms
Feb 15 00:13:08.133: INFO: Terminating ReplicationController wrapped-volume-race-930927d6-8c6e-4d27-8b31-00165e97537f pods took: 401.401069ms
STEP: Creating RC which spawns configmap-volume pods
Feb 15 00:13:32.577: INFO: Pod name wrapped-volume-race-925be4ac-ede1-4034-b929-528baa415817: Found 0 pods out of 5
Feb 15 00:13:37.613: INFO: Pod name wrapped-volume-race-925be4ac-ede1-4034-b929-528baa415817: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-925be4ac-ede1-4034-b929-528baa415817 in namespace emptydir-wrapper-4856, will wait for the garbage collector to delete the pods
Feb 15 00:14:07.825: INFO: Deleting ReplicationController wrapped-volume-race-925be4ac-ede1-4034-b929-528baa415817 took: 37.023699ms
Feb 15 00:14:08.628: INFO: Terminating ReplicationController wrapped-volume-race-925be4ac-ede1-4034-b929-528baa415817 pods took: 802.312614ms
STEP: Creating RC which spawns configmap-volume pods
Feb 15 00:14:23.641: INFO: Pod name wrapped-volume-race-438bd3c8-37a1-4c8f-9f74-ae6ab2fe9e4d: Found 0 pods out of 5
Feb 15 00:14:28.729: INFO: Pod name wrapped-volume-race-438bd3c8-37a1-4c8f-9f74-ae6ab2fe9e4d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-438bd3c8-37a1-4c8f-9f74-ae6ab2fe9e4d in namespace emptydir-wrapper-4856, will wait for the garbage collector to delete the pods
Feb 15 00:14:54.837: INFO: Deleting ReplicationController wrapped-volume-race-438bd3c8-37a1-4c8f-9f74-ae6ab2fe9e4d took: 8.722984ms
Feb 15 00:14:55.338: INFO: Terminating ReplicationController wrapped-volume-race-438bd3c8-37a1-4c8f-9f74-ae6ab2fe9e4d pods took: 500.663774ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:15:14.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4856" for this suite.

• [SLOW TEST:160.981 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":74,"skipped":1121,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:15:14.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-bbf7d29a-66d2-427a-bac0-8aa50ee12f88
STEP: Creating a pod to test consume configMaps
Feb 15 00:15:14.468: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240" in namespace "projected-904" to be "success or failure"
Feb 15 00:15:14.526: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 58.151141ms
Feb 15 00:15:16.540: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071374509s
Feb 15 00:15:18.632: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164141456s
Feb 15 00:15:20.653: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184994353s
Feb 15 00:15:22.783: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314219601s
Feb 15 00:15:24.813: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 10.344962647s
Feb 15 00:15:26.823: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Pending", Reason="", readiness=false. Elapsed: 12.354210629s
Feb 15 00:15:28.830: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.361483141s
STEP: Saw pod success
Feb 15 00:15:28.830: INFO: Pod "pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240" satisfied condition "success or failure"
Feb 15 00:15:28.833: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 00:15:28.936: INFO: Waiting for pod pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240 to disappear
Feb 15 00:15:28.942: INFO: Pod pod-projected-configmaps-e72d30b8-69d6-4f97-8c23-0a285fc7c240 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:15:28.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-904" for this suite.

• [SLOW TEST:14.695 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":75,"skipped":1129,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:15:28.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Feb 15 00:15:29.119: INFO: Waiting up to 5m0s for pod "var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9" in namespace "var-expansion-1563" to be "success or failure"
Feb 15 00:15:29.132: INFO: Pod "var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.388942ms
Feb 15 00:15:31.162: INFO: Pod "var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043196299s
Feb 15 00:15:33.226: INFO: Pod "var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107246651s
Feb 15 00:15:35.232: INFO: Pod "var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11245902s
STEP: Saw pod success
Feb 15 00:15:35.232: INFO: Pod "var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9" satisfied condition "success or failure"
Feb 15 00:15:35.235: INFO: Trying to get logs from node jerma-node pod var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9 container dapi-container: 
STEP: delete the pod
Feb 15 00:15:35.274: INFO: Waiting for pod var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9 to disappear
Feb 15 00:15:35.378: INFO: Pod var-expansion-4ca3448e-1c37-4c40-85dd-edb130159cf9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:15:35.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1563" for this suite.

• [SLOW TEST:6.440 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":76,"skipped":1139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:15:35.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6943
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6943
STEP: Creating statefulset with conflicting port in namespace statefulset-6943
STEP: Waiting until pod test-pod will start running in namespace statefulset-6943
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6943
Feb 15 00:15:45.697: INFO: Observed stateful pod in namespace: statefulset-6943, name: ss-0, uid: 1aa23044-7a2c-4328-b02d-5d9514797a5f, status phase: Pending. Waiting for statefulset controller to delete.
Feb 15 00:15:45.710: INFO: Observed stateful pod in namespace: statefulset-6943, name: ss-0, uid: 1aa23044-7a2c-4328-b02d-5d9514797a5f, status phase: Failed. Waiting for statefulset controller to delete.
Feb 15 00:15:45.760: INFO: Observed stateful pod in namespace: statefulset-6943, name: ss-0, uid: 1aa23044-7a2c-4328-b02d-5d9514797a5f, status phase: Failed. Waiting for statefulset controller to delete.
Feb 15 00:15:45.794: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6943
STEP: Removing pod with conflicting port in namespace statefulset-6943
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6943 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 15 00:15:56.205: INFO: Deleting all statefulset in ns statefulset-6943
Feb 15 00:15:56.211: INFO: Scaling statefulset ss to 0
Feb 15 00:16:06.285: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:16:06.290: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:16:06.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6943" for this suite.

• [SLOW TEST:30.960 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":77,"skipped":1168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:16:06.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:16:06.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5323'
Feb 15 00:16:07.216: INFO: stderr: ""
Feb 15 00:16:07.216: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 15 00:16:07.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5323'
Feb 15 00:16:07.716: INFO: stderr: ""
Feb 15 00:16:07.717: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 15 00:16:08.764: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:08.765: INFO: Found 0 / 1
Feb 15 00:16:09.725: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:09.725: INFO: Found 0 / 1
Feb 15 00:16:10.756: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:10.757: INFO: Found 0 / 1
Feb 15 00:16:11.726: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:11.726: INFO: Found 0 / 1
Feb 15 00:16:12.928: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:12.928: INFO: Found 0 / 1
Feb 15 00:16:13.733: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:13.734: INFO: Found 0 / 1
Feb 15 00:16:14.727: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:14.727: INFO: Found 1 / 1
Feb 15 00:16:14.727: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 15 00:16:14.734: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:16:14.734: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 15 00:16:14.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-sdn72 --namespace=kubectl-5323'
Feb 15 00:16:14.893: INFO: stderr: ""
Feb 15 00:16:14.893: INFO: stdout: "Name:         agnhost-master-sdn72\nNamespace:    kubectl-5323\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Sat, 15 Feb 2020 00:16:07 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://dd5bbf8a406c2add7b220bab5e079c612924b2583b80eb77bf4718b73f413375\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 15 Feb 2020 00:16:13 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rn8z2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-rn8z2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rn8z2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-5323/agnhost-master-sdn72 to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    0s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 15 00:16:14.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5323'
Feb 15 00:16:15.100: INFO: stderr: ""
Feb 15 00:16:15.101: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5323\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-sdn72\n"
Feb 15 00:16:15.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5323'
Feb 15 00:16:15.249: INFO: stderr: ""
Feb 15 00:16:15.249: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5323\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.251.89\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 15 00:16:15.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 15 00:16:15.393: INFO: stderr: ""
Feb 15 00:16:15.393: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Sat, 15 Feb 2020 00:16:06 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 15 Feb 2020 00:16:01 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 15 Feb 2020 00:16:01 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 15 Feb 2020 00:16:01 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 15 Feb 2020 00:16:01 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         41d\n  kubectl-5323                agnhost-master-sdn72    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 15 00:16:15.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5323'
Feb 15 00:16:15.473: INFO: stderr: ""
Feb 15 00:16:15.473: INFO: stdout: "Name:         kubectl-5323\nLabels:       e2e-framework=kubectl\n              e2e-run=7274ae64-4b01-48a3-8283-13f646657458\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:16:15.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5323" for this suite.

• [SLOW TEST:9.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":78,"skipped":1213,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:16:15.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0215 00:16:18.739910      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 00:16:18.739: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:16:18.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9876" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":79,"skipped":1226,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:16:18.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-49d46bd9-9d6c-4258-8ae0-9154cb5fed95 in namespace container-probe-5983
Feb 15 00:16:37.187: INFO: Started pod liveness-49d46bd9-9d6c-4258-8ae0-9154cb5fed95 in namespace container-probe-5983
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 00:16:37.190: INFO: Initial restart count of pod liveness-49d46bd9-9d6c-4258-8ae0-9154cb5fed95 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:20:39.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5983" for this suite.

• [SLOW TEST:260.568 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":80,"skipped":1258,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:20:39.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 15 00:20:58.000: INFO: Successfully updated pod "adopt-release-f94fx"
STEP: Checking that the Job readopts the Pod
Feb 15 00:20:58.000: INFO: Waiting up to 15m0s for pod "adopt-release-f94fx" in namespace "job-2262" to be "adopted"
Feb 15 00:20:58.019: INFO: Pod "adopt-release-f94fx": Phase="Running", Reason="", readiness=true. Elapsed: 18.741266ms
Feb 15 00:21:00.028: INFO: Pod "adopt-release-f94fx": Phase="Running", Reason="", readiness=true. Elapsed: 2.0272977s
Feb 15 00:21:00.028: INFO: Pod "adopt-release-f94fx" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 15 00:21:00.558: INFO: Successfully updated pod "adopt-release-f94fx"
STEP: Checking that the Job releases the Pod
Feb 15 00:21:00.559: INFO: Waiting up to 15m0s for pod "adopt-release-f94fx" in namespace "job-2262" to be "released"
Feb 15 00:21:00.616: INFO: Pod "adopt-release-f94fx": Phase="Running", Reason="", readiness=true. Elapsed: 57.298923ms
Feb 15 00:21:02.778: INFO: Pod "adopt-release-f94fx": Phase="Running", Reason="", readiness=true. Elapsed: 2.218751976s
Feb 15 00:21:02.778: INFO: Pod "adopt-release-f94fx" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:21:02.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2262" for this suite.

• [SLOW TEST:23.483 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":81,"skipped":1266,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:21:02.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466
STEP: creating a pod
Feb 15 00:21:03.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7791 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 15 00:21:03.948: INFO: stderr: ""
Feb 15 00:21:03.948: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Feb 15 00:21:03.948: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb 15 00:21:03.948: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7791" to be "running and ready, or succeeded"
Feb 15 00:21:03.952: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228165ms
Feb 15 00:21:05.963: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014732567s
Feb 15 00:21:07.993: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045093168s
Feb 15 00:21:12.648: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.700060082s
Feb 15 00:21:14.659: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.710965102s
Feb 15 00:21:16.666: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.717767923s
Feb 15 00:21:18.673: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 14.724756515s
Feb 15 00:21:18.673: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 15 00:21:18.673: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 15 00:21:18.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7791'
Feb 15 00:21:18.835: INFO: stderr: ""
Feb 15 00:21:18.836: INFO: stdout: "I0215 00:21:16.353109       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/j6k 593\nI0215 00:21:16.553463       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/sc4k 378\nI0215 00:21:16.753478       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/v7f7 518\nI0215 00:21:16.953773       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/l2bv 533\nI0215 00:21:17.153716       1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/b8h 569\nI0215 00:21:17.353470       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/xqm9 453\nI0215 00:21:17.553562       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/j8n 264\nI0215 00:21:17.753474       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/d669 212\nI0215 00:21:17.953497       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/nm9 501\nI0215 00:21:18.153310       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/zhgc 556\nI0215 00:21:18.354176       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/2dkj 495\nI0215 00:21:18.553707       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/x9q 599\nI0215 00:21:18.753401       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/w9w 422\n"
STEP: limiting log lines
Feb 15 00:21:18.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7791 --tail=1'
Feb 15 00:21:18.983: INFO: stderr: ""
Feb 15 00:21:18.983: INFO: stdout: "I0215 00:21:18.953400       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/psl 402\n"
Feb 15 00:21:18.983: INFO: got output "I0215 00:21:18.953400       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/psl 402\n"
STEP: limiting log bytes
Feb 15 00:21:18.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7791 --limit-bytes=1'
Feb 15 00:21:19.168: INFO: stderr: ""
Feb 15 00:21:19.168: INFO: stdout: "I"
Feb 15 00:21:19.168: INFO: got output "I"
STEP: exposing timestamps
Feb 15 00:21:19.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7791 --tail=1 --timestamps'
Feb 15 00:21:19.266: INFO: stderr: ""
Feb 15 00:21:19.266: INFO: stdout: "2020-02-15T00:21:19.154182953Z I0215 00:21:19.153868       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/tbf 421\n"
Feb 15 00:21:19.266: INFO: got output "2020-02-15T00:21:19.154182953Z I0215 00:21:19.153868       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/tbf 421\n"
STEP: restricting to a time range
Feb 15 00:21:21.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7791 --since=1s'
Feb 15 00:21:22.004: INFO: stderr: ""
Feb 15 00:21:22.004: INFO: stdout: "I0215 00:21:21.154334       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/fjcc 310\nI0215 00:21:21.353695       1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/qq9 468\nI0215 00:21:21.553466       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/jqqb 409\nI0215 00:21:21.753467       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/tz8 591\nI0215 00:21:21.953556       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/qsc 284\n"
Feb 15 00:21:22.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7791 --since=24h'
Feb 15 00:21:22.185: INFO: stderr: ""
Feb 15 00:21:22.185: INFO: stdout: "I0215 00:21:16.353109       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/j6k 593\nI0215 00:21:16.553463       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/sc4k 378\nI0215 00:21:16.753478       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/v7f7 518\nI0215 00:21:16.953773       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/l2bv 533\nI0215 00:21:17.153716       1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/b8h 569\nI0215 00:21:17.353470       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/xqm9 453\nI0215 00:21:17.553562       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/j8n 264\nI0215 00:21:17.753474       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/d669 212\nI0215 00:21:17.953497       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/nm9 501\nI0215 00:21:18.153310       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/zhgc 556\nI0215 00:21:18.354176       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/2dkj 495\nI0215 00:21:18.553707       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/x9q 599\nI0215 00:21:18.753401       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/w9w 422\nI0215 00:21:18.953400       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/psl 402\nI0215 00:21:19.153868       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/tbf 421\nI0215 00:21:19.353346       1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/qss 472\nI0215 00:21:19.553489       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/9dnj 479\nI0215 00:21:19.753661       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/gbq 280\nI0215 00:21:19.953409       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/j5xp 377\nI0215 00:21:20.153339       1 logs_generator.go:76] 19 PUT 
/api/v1/namespaces/ns/pods/np2q 476\nI0215 00:21:20.353752       1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/7kx 456\nI0215 00:21:20.553827       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/shzc 373\nI0215 00:21:20.753434       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/72nh 341\nI0215 00:21:20.953463       1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/2mzw 533\nI0215 00:21:21.154334       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/fjcc 310\nI0215 00:21:21.353695       1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/qq9 468\nI0215 00:21:21.553466       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/jqqb 409\nI0215 00:21:21.753467       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/tz8 591\nI0215 00:21:21.953556       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/qsc 284\nI0215 00:21:22.153900       1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/jz4 390\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Feb 15 00:21:22.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7791'
Feb 15 00:21:26.893: INFO: stderr: ""
Feb 15 00:21:26.893: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:21:26.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7791" for this suite.

• [SLOW TEST:24.102 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":280,"completed":82,"skipped":1271,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:21:26.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 15 00:21:35.510: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-51 pod-service-account-32a6c4fc-e993-4088-8d5e-28b9387dceba -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 15 00:21:35.984: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-51 pod-service-account-32a6c4fc-e993-4088-8d5e-28b9387dceba -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 15 00:21:36.393: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-51 pod-service-account-32a6c4fc-e993-4088-8d5e-28b9387dceba -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:21:36.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-51" for this suite.

• [SLOW TEST:9.887 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":280,"completed":83,"skipped":1299,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:21:36.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:21:36.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 15 00:21:39.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 create -f -'
Feb 15 00:21:44.692: INFO: stderr: ""
Feb 15 00:21:44.692: INFO: stdout: "e2e-test-crd-publish-openapi-1798-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 15 00:21:44.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 delete e2e-test-crd-publish-openapi-1798-crds test-foo'
Feb 15 00:21:44.953: INFO: stderr: ""
Feb 15 00:21:44.954: INFO: stdout: "e2e-test-crd-publish-openapi-1798-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 15 00:21:44.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 apply -f -'
Feb 15 00:21:45.406: INFO: stderr: ""
Feb 15 00:21:45.406: INFO: stdout: "e2e-test-crd-publish-openapi-1798-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 15 00:21:45.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 delete e2e-test-crd-publish-openapi-1798-crds test-foo'
Feb 15 00:21:45.594: INFO: stderr: ""
Feb 15 00:21:45.594: INFO: stdout: "e2e-test-crd-publish-openapi-1798-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 15 00:21:45.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 create -f -'
Feb 15 00:21:46.100: INFO: rc: 1
Feb 15 00:21:46.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 apply -f -'
Feb 15 00:21:46.397: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 15 00:21:46.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 create -f -'
Feb 15 00:21:46.965: INFO: rc: 1
Feb 15 00:21:46.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1172 apply -f -'
Feb 15 00:21:47.413: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 15 00:21:47.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1798-crds'
Feb 15 00:21:47.736: INFO: stderr: ""
Feb 15 00:21:47.736: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1798-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 15 00:21:47.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1798-crds.metadata'
Feb 15 00:21:48.118: INFO: stderr: ""
Feb 15 00:21:48.119: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1798-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within which each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
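The generateName behaviour documented in the explain output above can be seen with a manifest like the following (illustrative; the prefix `demo-` is not part of the test suite):

```yaml
# Submitted via a create (POST) request, the server appends a unique
# suffix to the prefix below, so repeated creations never collide.
apiVersion: v1
kind: ConfigMap
metadata:
  generateName: demo-
data:
  key: value
```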
Feb 15 00:21:48.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1798-crds.spec'
Feb 15 00:21:48.683: INFO: stderr: ""
Feb 15 00:21:48.683: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1798-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 15 00:21:48.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1798-crds.spec.bars'
Feb 15 00:21:49.122: INFO: stderr: ""
Feb 15 00:21:49.122: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1798-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 15 00:21:49.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1798-crds.spec.bars2'
Feb 15 00:21:49.446: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:21:53.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1172" for this suite.

• [SLOW TEST:16.433 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":84,"skipped":1308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:21:53.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 15 00:21:53.275: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Feb 15 00:21:53.866: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 15 00:21:56.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322914, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:21:58.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322914, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:00.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322914, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:02.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322914, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322913, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:04.791: INFO: Waited 727.406844ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:22:05.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6802" for this suite.

• [SLOW TEST:12.359 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":85,"skipped":1352,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:22:05.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-9087
STEP: creating replication controller nodeport-test in namespace services-9087
I0215 00:22:05.808187      10 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9087, replica count: 2
I0215 00:22:08.859714      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:22:11.860372      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:22:14.861114      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:22:17.861757      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:22:20.862600      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 00:22:23.863846      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 15 00:22:23.864: INFO: Creating new exec pod
Feb 15 00:22:32.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9087 execpod775kn -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb 15 00:22:33.336: INFO: stderr: "I0215 00:22:33.160337    1812 log.go:172] (0xc0000c4a50) (0xc00050a000) Create stream\nI0215 00:22:33.160461    1812 log.go:172] (0xc0000c4a50) (0xc00050a000) Stream added, broadcasting: 1\nI0215 00:22:33.164886    1812 log.go:172] (0xc0000c4a50) Reply frame received for 1\nI0215 00:22:33.164976    1812 log.go:172] (0xc0000c4a50) (0xc0006a1b80) Create stream\nI0215 00:22:33.164991    1812 log.go:172] (0xc0000c4a50) (0xc0006a1b80) Stream added, broadcasting: 3\nI0215 00:22:33.166691    1812 log.go:172] (0xc0000c4a50) Reply frame received for 3\nI0215 00:22:33.166746    1812 log.go:172] (0xc0000c4a50) (0xc0006a1d60) Create stream\nI0215 00:22:33.166773    1812 log.go:172] (0xc0000c4a50) (0xc0006a1d60) Stream added, broadcasting: 5\nI0215 00:22:33.170052    1812 log.go:172] (0xc0000c4a50) Reply frame received for 5\nI0215 00:22:33.245312    1812 log.go:172] (0xc0000c4a50) Data frame received for 5\nI0215 00:22:33.245367    1812 log.go:172] (0xc0006a1d60) (5) Data frame handling\nI0215 00:22:33.245450    1812 log.go:172] (0xc0006a1d60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0215 00:22:33.259861    1812 log.go:172] (0xc0000c4a50) Data frame received for 5\nI0215 00:22:33.259896    1812 log.go:172] (0xc0006a1d60) (5) Data frame handling\nI0215 00:22:33.259920    1812 log.go:172] (0xc0006a1d60) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0215 00:22:33.325993    1812 log.go:172] (0xc0000c4a50) Data frame received for 1\nI0215 00:22:33.326055    1812 log.go:172] (0xc00050a000) (1) Data frame handling\nI0215 00:22:33.326080    1812 log.go:172] (0xc00050a000) (1) Data frame sent\nI0215 00:22:33.326146    1812 log.go:172] (0xc0000c4a50) (0xc00050a000) Stream removed, broadcasting: 1\nI0215 00:22:33.326242    1812 log.go:172] (0xc0000c4a50) (0xc0006a1b80) Stream removed, broadcasting: 3\nI0215 00:22:33.327684    1812 log.go:172] (0xc0000c4a50) (0xc0006a1d60) Stream removed, broadcasting: 5\nI0215 00:22:33.327860    1812 log.go:172] (0xc0000c4a50) Go away received\nI0215 00:22:33.328034    1812 log.go:172] (0xc0000c4a50) (0xc00050a000) Stream removed, broadcasting: 1\nI0215 00:22:33.328062    1812 log.go:172] (0xc0000c4a50) (0xc0006a1b80) Stream removed, broadcasting: 3\nI0215 00:22:33.328081    1812 log.go:172] (0xc0000c4a50) (0xc0006a1d60) Stream removed, broadcasting: 5\n"
Feb 15 00:22:33.336: INFO: stdout: ""
Feb 15 00:22:33.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9087 execpod775kn -- /bin/sh -x -c nc -zv -t -w 2 10.96.133.239 80'
Feb 15 00:22:33.720: INFO: stderr: "I0215 00:22:33.513387    1834 log.go:172] (0xc000a78b00) (0xc000abca00) Create stream\nI0215 00:22:33.513774    1834 log.go:172] (0xc000a78b00) (0xc000abca00) Stream added, broadcasting: 1\nI0215 00:22:33.529115    1834 log.go:172] (0xc000a78b00) Reply frame received for 1\nI0215 00:22:33.529230    1834 log.go:172] (0xc000a78b00) (0xc000adc000) Create stream\nI0215 00:22:33.529249    1834 log.go:172] (0xc000a78b00) (0xc000adc000) Stream added, broadcasting: 3\nI0215 00:22:33.530885    1834 log.go:172] (0xc000a78b00) Reply frame received for 3\nI0215 00:22:33.531015    1834 log.go:172] (0xc000a78b00) (0xc000adc0a0) Create stream\nI0215 00:22:33.531046    1834 log.go:172] (0xc000a78b00) (0xc000adc0a0) Stream added, broadcasting: 5\nI0215 00:22:33.532645    1834 log.go:172] (0xc000a78b00) Reply frame received for 5\nI0215 00:22:33.611044    1834 log.go:172] (0xc000a78b00) Data frame received for 5\nI0215 00:22:33.611164    1834 log.go:172] (0xc000adc0a0) (5) Data frame handling\nI0215 00:22:33.611208    1834 log.go:172] (0xc000adc0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.133.239 80\nI0215 00:22:33.623038    1834 log.go:172] (0xc000a78b00) Data frame received for 5\nI0215 00:22:33.623594    1834 log.go:172] (0xc000adc0a0) (5) Data frame handling\nI0215 00:22:33.623653    1834 log.go:172] (0xc000adc0a0) (5) Data frame sent\nConnection to 10.96.133.239 80 port [tcp/http] succeeded!\nI0215 00:22:33.709765    1834 log.go:172] (0xc000a78b00) (0xc000adc000) Stream removed, broadcasting: 3\nI0215 00:22:33.709929    1834 log.go:172] (0xc000a78b00) Data frame received for 1\nI0215 00:22:33.709947    1834 log.go:172] (0xc000abca00) (1) Data frame handling\nI0215 00:22:33.709965    1834 log.go:172] (0xc000abca00) (1) Data frame sent\nI0215 00:22:33.710032    1834 log.go:172] (0xc000a78b00) (0xc000abca00) Stream removed, broadcasting: 1\nI0215 00:22:33.710529    1834 log.go:172] (0xc000a78b00) (0xc000adc0a0) Stream removed, broadcasting: 5\nI0215 00:22:33.710725    1834 log.go:172] (0xc000a78b00) Go away received\nI0215 00:22:33.711659    1834 log.go:172] (0xc000a78b00) (0xc000abca00) Stream removed, broadcasting: 1\nI0215 00:22:33.711695    1834 log.go:172] (0xc000a78b00) (0xc000adc000) Stream removed, broadcasting: 3\nI0215 00:22:33.711703    1834 log.go:172] (0xc000a78b00) (0xc000adc0a0) Stream removed, broadcasting: 5\n"
Feb 15 00:22:33.721: INFO: stdout: ""
Feb 15 00:22:33.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9087 execpod775kn -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32119'
Feb 15 00:22:34.223: INFO: stderr: "I0215 00:22:33.912929    1855 log.go:172] (0xc000c0ae70) (0xc000c3e5a0) Create stream\nI0215 00:22:33.913143    1855 log.go:172] (0xc000c0ae70) (0xc000c3e5a0) Stream added, broadcasting: 1\nI0215 00:22:33.923220    1855 log.go:172] (0xc000c0ae70) Reply frame received for 1\nI0215 00:22:33.923320    1855 log.go:172] (0xc000c0ae70) (0xc000652780) Create stream\nI0215 00:22:33.923338    1855 log.go:172] (0xc000c0ae70) (0xc000652780) Stream added, broadcasting: 3\nI0215 00:22:33.924745    1855 log.go:172] (0xc000c0ae70) Reply frame received for 3\nI0215 00:22:33.924782    1855 log.go:172] (0xc000c0ae70) (0xc000759400) Create stream\nI0215 00:22:33.924788    1855 log.go:172] (0xc000c0ae70) (0xc000759400) Stream added, broadcasting: 5\nI0215 00:22:33.925963    1855 log.go:172] (0xc000c0ae70) Reply frame received for 5\nI0215 00:22:34.050847    1855 log.go:172] (0xc000c0ae70) Data frame received for 5\nI0215 00:22:34.051401    1855 log.go:172] (0xc000759400) (5) Data frame handling\nI0215 00:22:34.051468    1855 log.go:172] (0xc000759400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32119\nI0215 00:22:34.053207    1855 log.go:172] (0xc000c0ae70) Data frame received for 5\nI0215 00:22:34.053233    1855 log.go:172] (0xc000759400) (5) Data frame handling\nI0215 00:22:34.053266    1855 log.go:172] (0xc000759400) (5) Data frame sent\nConnection to 10.96.2.250 32119 port [tcp/32119] succeeded!\nI0215 00:22:34.199199    1855 log.go:172] (0xc000c0ae70) Data frame received for 1\nI0215 00:22:34.199361    1855 log.go:172] (0xc000c0ae70) (0xc000652780) Stream removed, broadcasting: 3\nI0215 00:22:34.200313    1855 log.go:172] (0xc000c3e5a0) (1) Data frame handling\nI0215 00:22:34.200948    1855 log.go:172] (0xc000c3e5a0) (1) Data frame sent\nI0215 00:22:34.201137    1855 log.go:172] (0xc000c0ae70) (0xc000759400) Stream removed, broadcasting: 5\nI0215 00:22:34.201370    1855 log.go:172] (0xc000c0ae70) (0xc000c3e5a0) Stream removed, broadcasting: 1\nI0215 00:22:34.203047    1855 log.go:172] (0xc000c0ae70) Go away received\nI0215 00:22:34.203205    1855 log.go:172] (0xc000c0ae70) (0xc000c3e5a0) Stream removed, broadcasting: 1\nI0215 00:22:34.203234    1855 log.go:172] (0xc000c0ae70) (0xc000652780) Stream removed, broadcasting: 3\nI0215 00:22:34.203250    1855 log.go:172] (0xc000c0ae70) (0xc000759400) Stream removed, broadcasting: 5\n"
Feb 15 00:22:34.224: INFO: stdout: ""
Feb 15 00:22:34.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9087 execpod775kn -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32119'
Feb 15 00:22:34.581: INFO: stderr: "I0215 00:22:34.413615    1875 log.go:172] (0xc000920a50) (0xc0009101e0) Create stream\nI0215 00:22:34.413734    1875 log.go:172] (0xc000920a50) (0xc0009101e0) Stream added, broadcasting: 1\nI0215 00:22:34.416416    1875 log.go:172] (0xc000920a50) Reply frame received for 1\nI0215 00:22:34.416458    1875 log.go:172] (0xc000920a50) (0xc0008a40a0) Create stream\nI0215 00:22:34.416464    1875 log.go:172] (0xc000920a50) (0xc0008a40a0) Stream added, broadcasting: 3\nI0215 00:22:34.417354    1875 log.go:172] (0xc000920a50) Reply frame received for 3\nI0215 00:22:34.417384    1875 log.go:172] (0xc000920a50) (0xc0008ac000) Create stream\nI0215 00:22:34.417393    1875 log.go:172] (0xc000920a50) (0xc0008ac000) Stream added, broadcasting: 5\nI0215 00:22:34.418565    1875 log.go:172] (0xc000920a50) Reply frame received for 5\nI0215 00:22:34.473643    1875 log.go:172] (0xc000920a50) Data frame received for 5\nI0215 00:22:34.473827    1875 log.go:172] (0xc0008ac000) (5) Data frame handling\nI0215 00:22:34.473874    1875 log.go:172] (0xc0008ac000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32119\nI0215 00:22:34.476091    1875 log.go:172] (0xc000920a50) Data frame received for 5\nI0215 00:22:34.476130    1875 log.go:172] (0xc0008ac000) (5) Data frame handling\nI0215 00:22:34.476147    1875 log.go:172] (0xc0008ac000) (5) Data frame sent\nConnection to 10.96.1.234 32119 port [tcp/32119] succeeded!\nI0215 00:22:34.572455    1875 log.go:172] (0xc000920a50) (0xc0008ac000) Stream removed, broadcasting: 5\nI0215 00:22:34.572583    1875 log.go:172] (0xc000920a50) Data frame received for 1\nI0215 00:22:34.572678    1875 log.go:172] (0xc000920a50) (0xc0008a40a0) Stream removed, broadcasting: 3\nI0215 00:22:34.572723    1875 log.go:172] (0xc0009101e0) (1) Data frame handling\nI0215 00:22:34.572737    1875 log.go:172] (0xc0009101e0) (1) Data frame sent\nI0215 00:22:34.572741    1875 log.go:172] (0xc000920a50) (0xc0009101e0) Stream removed, broadcasting: 1\nI0215 00:22:34.572748    1875 log.go:172] (0xc000920a50) Go away received\nI0215 00:22:34.573647    1875 log.go:172] (0xc000920a50) (0xc0009101e0) Stream removed, broadcasting: 1\nI0215 00:22:34.573697    1875 log.go:172] (0xc000920a50) (0xc0008a40a0) Stream removed, broadcasting: 3\nI0215 00:22:34.573703    1875 log.go:172] (0xc000920a50) (0xc0008ac000) Stream removed, broadcasting: 5\n"
Feb 15 00:22:34.582: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:22:34.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9087" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:29.008 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":86,"skipped":1356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:22:34.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:22:34.713: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 15 00:22:39.745: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 15 00:22:43.788: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 15 00:22:45.799: INFO: Creating deployment "test-rollover-deployment"
Feb 15 00:22:45.844: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 15 00:22:47.866: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 15 00:22:47.879: INFO: Ensure that both replica sets have 1 created replica
Feb 15 00:22:47.890: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 15 00:22:47.899: INFO: Updating deployment test-rollover-deployment
Feb 15 00:22:47.900: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 15 00:22:49.925: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 15 00:22:49.932: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 15 00:22:49.945: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:22:49.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322968, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:51.954: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:22:51.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322968, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:53.959: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:22:53.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322968, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:55.956: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:22:55.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322968, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:57.957: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:22:57.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322977, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:22:59.959: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:22:59.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322977, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:23:01.959: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:23:01.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322977, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:23:03.962: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:23:03.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322977, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:23:05.980: INFO: all replica sets need to contain the pod-template-hash label
Feb 15 00:23:05.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322967, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322977, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717322965, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:23:07.955: INFO: 
Feb 15 00:23:07.955: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 15 00:23:07.967: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-4846 /apis/apps/v1/namespaces/deployment-4846/deployments/test-rollover-deployment 0f0c042d-d89f-4e5a-9c52-f01cc442288d 8482051 2 2020-02-15 00:22:45 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c14998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-15 00:22:47 +0000 UTC,LastTransitionTime:2020-02-15 00:22:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-15 00:23:07 +0000 UTC,LastTransitionTime:2020-02-15 00:22:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 15 00:23:07.972: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-4846 /apis/apps/v1/namespaces/deployment-4846/replicasets/test-rollover-deployment-574d6dfbff 8ab1516d-0f2d-419f-be67-ab820f8bee81 8482040 2 2020-02-15 00:22:47 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0f0c042d-d89f-4e5a-9c52-f01cc442288d 0xc003c14ea7 0xc003c14ea8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c14f18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 15 00:23:07.972: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 15 00:23:07.972: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-4846 /apis/apps/v1/namespaces/deployment-4846/replicasets/test-rollover-controller 0ee1d697-9efb-4e50-9b40-7e22808892e9 8482050 2 2020-02-15 00:22:34 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0f0c042d-d89f-4e5a-9c52-f01cc442288d 0xc003c14dc7 0xc003c14dc8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003c14e28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 15 00:23:07.972: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-4846 /apis/apps/v1/namespaces/deployment-4846/replicasets/test-rollover-deployment-f6c94f66c 27e05313-1dd7-4170-a82d-3c701e9256eb 8481983 2 2020-02-15 00:22:45 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0f0c042d-d89f-4e5a-9c52-f01cc442288d 0xc003c14f80 0xc003c14f81}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c14ff8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 15 00:23:07.976: INFO: Pod "test-rollover-deployment-574d6dfbff-5gtr4" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5gtr4 test-rollover-deployment-574d6dfbff- deployment-4846 /api/v1/namespaces/deployment-4846/pods/test-rollover-deployment-574d6dfbff-5gtr4 881fe414-9a3d-4525-b7ae-39f39b0442e9 8482014 0 2020-02-15 00:22:48 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 8ab1516d-0f2d-419f-be67-ab820f8bee81 0xc003c155b7 0xc003c155b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-44tqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-44tqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-44tqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:22:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:22:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 00:22:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-15 00:22:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 00:22:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://37caf26064f2c19e2b5f5bab1146539ede424b8c91e0f751b35d543d90a31fea,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:23:07.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4846" for this suite.

• [SLOW TEST:33.390 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":87,"skipped":1385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:23:07.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:23:08.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9826" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":280,"completed":88,"skipped":1411,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:23:08.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9109
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-9109
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9109
Feb 15 00:23:08.593: INFO: Found 0 stateful pods, waiting for 1
Feb 15 00:23:18.668: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 00:23:28.605: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 15 00:23:28.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:23:29.145: INFO: stderr: "I0215 00:23:28.804100    1891 log.go:172] (0xc0000f42c0) (0xc0000c1d60) Create stream\nI0215 00:23:28.804246    1891 log.go:172] (0xc0000f42c0) (0xc0000c1d60) Stream added, broadcasting: 1\nI0215 00:23:28.810692    1891 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0215 00:23:28.810726    1891 log.go:172] (0xc0000f42c0) (0xc0000c1f40) Create stream\nI0215 00:23:28.810733    1891 log.go:172] (0xc0000f42c0) (0xc0000c1f40) Stream added, broadcasting: 3\nI0215 00:23:28.811852    1891 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0215 00:23:28.811873    1891 log.go:172] (0xc0000f42c0) (0xc000900000) Create stream\nI0215 00:23:28.811878    1891 log.go:172] (0xc0000f42c0) (0xc000900000) Stream added, broadcasting: 5\nI0215 00:23:28.813376    1891 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0215 00:23:28.935083    1891 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0215 00:23:28.935203    1891 log.go:172] (0xc000900000) (5) Data frame handling\nI0215 00:23:28.935236    1891 log.go:172] (0xc000900000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:23:28.975427    1891 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0215 00:23:28.975537    1891 log.go:172] (0xc0000c1f40) (3) Data frame handling\nI0215 00:23:28.975564    1891 log.go:172] (0xc0000c1f40) (3) Data frame sent\nI0215 00:23:29.131010    1891 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0215 00:23:29.131125    1891 log.go:172] (0xc0000c1d60) (1) Data frame handling\nI0215 00:23:29.131165    1891 log.go:172] (0xc0000c1d60) (1) Data frame sent\nI0215 00:23:29.131477    1891 log.go:172] (0xc0000f42c0) (0xc000900000) Stream removed, broadcasting: 5\nI0215 00:23:29.131589    1891 log.go:172] (0xc0000f42c0) (0xc0000c1f40) Stream removed, broadcasting: 3\nI0215 00:23:29.131637    1891 log.go:172] (0xc0000f42c0) (0xc0000c1d60) Stream removed, broadcasting: 1\nI0215 00:23:29.131666    1891 log.go:172] (0xc0000f42c0) Go away received\nI0215 00:23:29.132973    1891 log.go:172] (0xc0000f42c0) (0xc0000c1d60) Stream removed, broadcasting: 1\nI0215 00:23:29.132995    1891 log.go:172] (0xc0000f42c0) (0xc0000c1f40) Stream removed, broadcasting: 3\nI0215 00:23:29.133000    1891 log.go:172] (0xc0000f42c0) (0xc000900000) Stream removed, broadcasting: 5\n"
Feb 15 00:23:29.145: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:23:29.145: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:23:29.229: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:23:29.229: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:23:29.397: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 15 00:23:29.397: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:08 +0000 UTC  }]
Feb 15 00:23:29.398: INFO: 
Feb 15 00:23:29.398: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 15 00:23:30.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994295057s
Feb 15 00:23:32.145: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.503510722s
Feb 15 00:23:33.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.24671588s
Feb 15 00:23:35.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.216756364s
Feb 15 00:23:36.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.017867388s
Feb 15 00:23:37.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.414155244s
Feb 15 00:23:38.997: INFO: Verifying statefulset ss doesn't scale past 3 for another 404.077031ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9109
Feb 15 00:23:40.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:23:40.374: INFO: stderr: "I0215 00:23:40.197460    1908 log.go:172] (0xc000111ef0) (0xc0008d8780) Create stream\nI0215 00:23:40.197642    1908 log.go:172] (0xc000111ef0) (0xc0008d8780) Stream added, broadcasting: 1\nI0215 00:23:40.209185    1908 log.go:172] (0xc000111ef0) Reply frame received for 1\nI0215 00:23:40.209252    1908 log.go:172] (0xc000111ef0) (0xc000602780) Create stream\nI0215 00:23:40.209266    1908 log.go:172] (0xc000111ef0) (0xc000602780) Stream added, broadcasting: 3\nI0215 00:23:40.210422    1908 log.go:172] (0xc000111ef0) Reply frame received for 3\nI0215 00:23:40.210450    1908 log.go:172] (0xc000111ef0) (0xc000739400) Create stream\nI0215 00:23:40.210463    1908 log.go:172] (0xc000111ef0) (0xc000739400) Stream added, broadcasting: 5\nI0215 00:23:40.211665    1908 log.go:172] (0xc000111ef0) Reply frame received for 5\nI0215 00:23:40.271096    1908 log.go:172] (0xc000111ef0) Data frame received for 5\nI0215 00:23:40.271199    1908 log.go:172] (0xc000739400) (5) Data frame handling\nI0215 00:23:40.271261    1908 log.go:172] (0xc000739400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0215 00:23:40.271613    1908 log.go:172] (0xc000111ef0) Data frame received for 3\nI0215 00:23:40.271627    1908 log.go:172] (0xc000602780) (3) Data frame handling\nI0215 00:23:40.271647    1908 log.go:172] (0xc000602780) (3) Data frame sent\nI0215 00:23:40.359442    1908 log.go:172] (0xc000111ef0) Data frame received for 1\nI0215 00:23:40.359578    1908 log.go:172] (0xc000111ef0) (0xc000739400) Stream removed, broadcasting: 5\nI0215 00:23:40.359661    1908 log.go:172] (0xc0008d8780) (1) Data frame handling\nI0215 00:23:40.359748    1908 log.go:172] (0xc0008d8780) (1) Data frame sent\nI0215 00:23:40.359932    1908 log.go:172] (0xc000111ef0) (0xc000602780) Stream removed, broadcasting: 3\nI0215 00:23:40.360155    1908 log.go:172] (0xc000111ef0) (0xc0008d8780) Stream removed, broadcasting: 1\nI0215 00:23:40.360303    1908 log.go:172] (0xc000111ef0) Go away received\nI0215 00:23:40.362790    1908 log.go:172] (0xc000111ef0) (0xc0008d8780) Stream removed, broadcasting: 1\nI0215 00:23:40.362841    1908 log.go:172] (0xc000111ef0) (0xc000602780) Stream removed, broadcasting: 3\nI0215 00:23:40.362870    1908 log.go:172] (0xc000111ef0) (0xc000739400) Stream removed, broadcasting: 5\n"
Feb 15 00:23:40.375: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:23:40.375: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:23:40.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:23:40.802: INFO: stderr: "I0215 00:23:40.535911    1930 log.go:172] (0xc000596fd0) (0xc0006da000) Create stream\nI0215 00:23:40.536514    1930 log.go:172] (0xc000596fd0) (0xc0006da000) Stream added, broadcasting: 1\nI0215 00:23:40.544333    1930 log.go:172] (0xc000596fd0) Reply frame received for 1\nI0215 00:23:40.544490    1930 log.go:172] (0xc000596fd0) (0xc0006b1cc0) Create stream\nI0215 00:23:40.544521    1930 log.go:172] (0xc000596fd0) (0xc0006b1cc0) Stream added, broadcasting: 3\nI0215 00:23:40.547250    1930 log.go:172] (0xc000596fd0) Reply frame received for 3\nI0215 00:23:40.547303    1930 log.go:172] (0xc000596fd0) (0xc0006da140) Create stream\nI0215 00:23:40.547322    1930 log.go:172] (0xc000596fd0) (0xc0006da140) Stream added, broadcasting: 5\nI0215 00:23:40.548868    1930 log.go:172] (0xc000596fd0) Reply frame received for 5\nI0215 00:23:40.646965    1930 log.go:172] (0xc000596fd0) Data frame received for 3\nI0215 00:23:40.647155    1930 log.go:172] (0xc0006b1cc0) (3) Data frame handling\nI0215 00:23:40.647212    1930 log.go:172] (0xc0006b1cc0) (3) Data frame sent\nI0215 00:23:40.647564    1930 log.go:172] (0xc000596fd0) Data frame received for 5\nI0215 00:23:40.647575    1930 log.go:172] (0xc0006da140) (5) Data frame handling\nI0215 00:23:40.647596    1930 log.go:172] (0xc0006da140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0215 00:23:40.777884    1930 log.go:172] (0xc000596fd0) Data frame received for 1\nI0215 00:23:40.778108    1930 log.go:172] (0xc000596fd0) (0xc0006b1cc0) Stream removed, broadcasting: 3\nI0215 00:23:40.778195    1930 log.go:172] (0xc0006da000) (1) Data frame handling\nI0215 00:23:40.778220    1930 log.go:172] (0xc0006da000) (1) Data frame sent\nI0215 00:23:40.778234    1930 log.go:172] (0xc000596fd0) (0xc0006da000) Stream removed, broadcasting: 1\nI0215 00:23:40.779800    1930 log.go:172] (0xc000596fd0) (0xc0006da140) Stream removed, broadcasting: 5\nI0215 00:23:40.779968    1930 log.go:172] (0xc000596fd0) Go away received\nI0215 00:23:40.780261    1930 log.go:172] (0xc000596fd0) (0xc0006da000) Stream removed, broadcasting: 1\nI0215 00:23:40.780290    1930 log.go:172] (0xc000596fd0) (0xc0006b1cc0) Stream removed, broadcasting: 3\nI0215 00:23:40.780305    1930 log.go:172] (0xc000596fd0) (0xc0006da140) Stream removed, broadcasting: 5\n"
Feb 15 00:23:40.802: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:23:40.803: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:23:40.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:23:41.158: INFO: stderr: "I0215 00:23:40.969943    1951 log.go:172] (0xc000a2b340) (0xc000af21e0) Create stream\nI0215 00:23:40.970058    1951 log.go:172] (0xc000a2b340) (0xc000af21e0) Stream added, broadcasting: 1\nI0215 00:23:40.974177    1951 log.go:172] (0xc000a2b340) Reply frame received for 1\nI0215 00:23:40.974255    1951 log.go:172] (0xc000a2b340) (0xc0009920a0) Create stream\nI0215 00:23:40.974271    1951 log.go:172] (0xc000a2b340) (0xc0009920a0) Stream added, broadcasting: 3\nI0215 00:23:40.975512    1951 log.go:172] (0xc000a2b340) Reply frame received for 3\nI0215 00:23:40.975597    1951 log.go:172] (0xc000a2b340) (0xc0009c6000) Create stream\nI0215 00:23:40.975613    1951 log.go:172] (0xc000a2b340) (0xc0009c6000) Stream added, broadcasting: 5\nI0215 00:23:40.977714    1951 log.go:172] (0xc000a2b340) Reply frame received for 5\nI0215 00:23:41.052965    1951 log.go:172] (0xc000a2b340) Data frame received for 5\nI0215 00:23:41.053237    1951 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0215 00:23:41.053313    1951 log.go:172] (0xc0009c6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0215 00:23:41.053425    1951 log.go:172] (0xc000a2b340) Data frame received for 3\nI0215 00:23:41.053440    1951 log.go:172] (0xc0009920a0) (3) Data frame handling\nI0215 00:23:41.053462    1951 log.go:172] (0xc0009920a0) (3) Data frame sent\nI0215 00:23:41.148397    1951 log.go:172] (0xc000a2b340) (0xc0009920a0) Stream removed, broadcasting: 3\nI0215 00:23:41.148812    1951 log.go:172] (0xc000a2b340) Data frame received for 1\nI0215 00:23:41.148860    1951 log.go:172] (0xc000af21e0) (1) Data frame handling\nI0215 00:23:41.148898    1951 log.go:172] (0xc000af21e0) (1) Data frame sent\nI0215 00:23:41.149058    1951 log.go:172] (0xc000a2b340) (0xc000af21e0) Stream removed, broadcasting: 1\nI0215 00:23:41.149193    1951 log.go:172] (0xc000a2b340) (0xc0009c6000) Stream removed, broadcasting: 5\nI0215 00:23:41.149356    1951 log.go:172] (0xc000a2b340) Go away received\nI0215 00:23:41.150165    1951 log.go:172] (0xc000a2b340) (0xc000af21e0) Stream removed, broadcasting: 1\nI0215 00:23:41.150178    1951 log.go:172] (0xc000a2b340) (0xc0009920a0) Stream removed, broadcasting: 3\nI0215 00:23:41.150186    1951 log.go:172] (0xc000a2b340) (0xc0009c6000) Stream removed, broadcasting: 5\n"
Feb 15 00:23:41.158: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:23:41.159: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:23:41.221: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:23:41.222: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:23:41.222: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 15 00:23:41.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:23:41.570: INFO: stderr: "I0215 00:23:41.412177    1971 log.go:172] (0xc00097f080) (0xc0009463c0) Create stream\nI0215 00:23:41.412307    1971 log.go:172] (0xc00097f080) (0xc0009463c0) Stream added, broadcasting: 1\nI0215 00:23:41.424755    1971 log.go:172] (0xc00097f080) Reply frame received for 1\nI0215 00:23:41.424827    1971 log.go:172] (0xc00097f080) (0xc000952500) Create stream\nI0215 00:23:41.424842    1971 log.go:172] (0xc00097f080) (0xc000952500) Stream added, broadcasting: 3\nI0215 00:23:41.429252    1971 log.go:172] (0xc00097f080) Reply frame received for 3\nI0215 00:23:41.429277    1971 log.go:172] (0xc00097f080) (0xc0009525a0) Create stream\nI0215 00:23:41.429296    1971 log.go:172] (0xc00097f080) (0xc0009525a0) Stream added, broadcasting: 5\nI0215 00:23:41.431291    1971 log.go:172] (0xc00097f080) Reply frame received for 5\nI0215 00:23:41.491289    1971 log.go:172] (0xc00097f080) Data frame received for 5\nI0215 00:23:41.491318    1971 log.go:172] (0xc0009525a0) (5) Data frame handling\nI0215 00:23:41.491353    1971 log.go:172] (0xc0009525a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:23:41.491434    1971 log.go:172] (0xc00097f080) Data frame received for 3\nI0215 00:23:41.491461    1971 log.go:172] (0xc000952500) (3) Data frame handling\nI0215 00:23:41.491481    1971 log.go:172] (0xc000952500) (3) Data frame sent\nI0215 00:23:41.554341    1971 log.go:172] (0xc00097f080) (0xc000952500) Stream removed, broadcasting: 3\nI0215 00:23:41.554735    1971 log.go:172] (0xc00097f080) Data frame received for 1\nI0215 00:23:41.554757    1971 log.go:172] (0xc00097f080) (0xc0009525a0) Stream removed, broadcasting: 5\nI0215 00:23:41.554785    1971 log.go:172] (0xc0009463c0) (1) Data frame handling\nI0215 00:23:41.554802    1971 log.go:172] (0xc0009463c0) (1) Data frame sent\nI0215 00:23:41.554842    1971 log.go:172] (0xc00097f080) (0xc0009463c0) Stream removed, broadcasting: 1\nI0215 00:23:41.554856    1971 
log.go:172] (0xc00097f080) Go away received\nI0215 00:23:41.555498    1971 log.go:172] (0xc00097f080) (0xc0009463c0) Stream removed, broadcasting: 1\nI0215 00:23:41.555516    1971 log.go:172] (0xc00097f080) (0xc000952500) Stream removed, broadcasting: 3\nI0215 00:23:41.555522    1971 log.go:172] (0xc00097f080) (0xc0009525a0) Stream removed, broadcasting: 5\n"
Feb 15 00:23:41.571: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:23:41.571: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:23:41.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:23:41.961: INFO: stderr: "I0215 00:23:41.750101    1989 log.go:172] (0xc000c06fd0) (0xc000974820) Create stream\nI0215 00:23:41.750267    1989 log.go:172] (0xc000c06fd0) (0xc000974820) Stream added, broadcasting: 1\nI0215 00:23:41.766664    1989 log.go:172] (0xc000c06fd0) Reply frame received for 1\nI0215 00:23:41.766714    1989 log.go:172] (0xc000c06fd0) (0xc000958000) Create stream\nI0215 00:23:41.766730    1989 log.go:172] (0xc000c06fd0) (0xc000958000) Stream added, broadcasting: 3\nI0215 00:23:41.768257    1989 log.go:172] (0xc000c06fd0) Reply frame received for 3\nI0215 00:23:41.768312    1989 log.go:172] (0xc000c06fd0) (0xc0009580a0) Create stream\nI0215 00:23:41.768320    1989 log.go:172] (0xc000c06fd0) (0xc0009580a0) Stream added, broadcasting: 5\nI0215 00:23:41.770914    1989 log.go:172] (0xc000c06fd0) Reply frame received for 5\nI0215 00:23:41.831858    1989 log.go:172] (0xc000c06fd0) Data frame received for 5\nI0215 00:23:41.831900    1989 log.go:172] (0xc0009580a0) (5) Data frame handling\nI0215 00:23:41.831920    1989 log.go:172] (0xc0009580a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:23:41.863641    1989 log.go:172] (0xc000c06fd0) Data frame received for 3\nI0215 00:23:41.863695    1989 log.go:172] (0xc000958000) (3) Data frame handling\nI0215 00:23:41.863724    1989 log.go:172] (0xc000958000) (3) Data frame sent\nI0215 00:23:41.950535    1989 log.go:172] (0xc000c06fd0) Data frame received for 1\nI0215 00:23:41.951082    1989 log.go:172] (0xc000c06fd0) (0xc0009580a0) Stream removed, broadcasting: 5\nI0215 00:23:41.951158    1989 log.go:172] (0xc000974820) (1) Data frame handling\nI0215 00:23:41.951189    1989 log.go:172] (0xc000974820) (1) Data frame sent\nI0215 00:23:41.951309    1989 log.go:172] (0xc000c06fd0) (0xc000958000) Stream removed, broadcasting: 3\nI0215 00:23:41.951468    1989 log.go:172] (0xc000c06fd0) (0xc000974820) Stream removed, broadcasting: 1\nI0215 00:23:41.951534    1989 
log.go:172] (0xc000c06fd0) Go away received\nI0215 00:23:41.952558    1989 log.go:172] (0xc000c06fd0) (0xc000974820) Stream removed, broadcasting: 1\nI0215 00:23:41.952570    1989 log.go:172] (0xc000c06fd0) (0xc000958000) Stream removed, broadcasting: 3\nI0215 00:23:41.952581    1989 log.go:172] (0xc000c06fd0) (0xc0009580a0) Stream removed, broadcasting: 5\n"
Feb 15 00:23:41.961: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:23:41.961: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:23:41.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:23:42.506: INFO: stderr: "I0215 00:23:42.259263    2010 log.go:172] (0xc000bf2160) (0xc000bd80a0) Create stream\nI0215 00:23:42.259640    2010 log.go:172] (0xc000bf2160) (0xc000bd80a0) Stream added, broadcasting: 1\nI0215 00:23:42.264389    2010 log.go:172] (0xc000bf2160) Reply frame received for 1\nI0215 00:23:42.264496    2010 log.go:172] (0xc000bf2160) (0xc000a1a000) Create stream\nI0215 00:23:42.264525    2010 log.go:172] (0xc000bf2160) (0xc000a1a000) Stream added, broadcasting: 3\nI0215 00:23:42.268237    2010 log.go:172] (0xc000bf2160) Reply frame received for 3\nI0215 00:23:42.268307    2010 log.go:172] (0xc000bf2160) (0xc000631ea0) Create stream\nI0215 00:23:42.268326    2010 log.go:172] (0xc000bf2160) (0xc000631ea0) Stream added, broadcasting: 5\nI0215 00:23:42.269874    2010 log.go:172] (0xc000bf2160) Reply frame received for 5\nI0215 00:23:42.368044    2010 log.go:172] (0xc000bf2160) Data frame received for 5\nI0215 00:23:42.368141    2010 log.go:172] (0xc000631ea0) (5) Data frame handling\nI0215 00:23:42.368184    2010 log.go:172] (0xc000631ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:23:42.402464    2010 log.go:172] (0xc000bf2160) Data frame received for 3\nI0215 00:23:42.402506    2010 log.go:172] (0xc000a1a000) (3) Data frame handling\nI0215 00:23:42.402527    2010 log.go:172] (0xc000a1a000) (3) Data frame sent\nI0215 00:23:42.477717    2010 log.go:172] (0xc000bf2160) Data frame received for 1\nI0215 00:23:42.477908    2010 log.go:172] (0xc000bf2160) (0xc000631ea0) Stream removed, broadcasting: 5\nI0215 00:23:42.478155    2010 log.go:172] (0xc000bf2160) (0xc000a1a000) Stream removed, broadcasting: 3\nI0215 00:23:42.478434    2010 log.go:172] (0xc000bd80a0) (1) Data frame handling\nI0215 00:23:42.478482    2010 log.go:172] (0xc000bd80a0) (1) Data frame sent\nI0215 00:23:42.478504    2010 log.go:172] (0xc000bf2160) (0xc000bd80a0) Stream removed, broadcasting: 1\nI0215 00:23:42.478543    2010 
log.go:172] (0xc000bf2160) Go away received\nI0215 00:23:42.485712    2010 log.go:172] (0xc000bf2160) (0xc000bd80a0) Stream removed, broadcasting: 1\nI0215 00:23:42.485931    2010 log.go:172] (0xc000bf2160) (0xc000a1a000) Stream removed, broadcasting: 3\nI0215 00:23:42.485985    2010 log.go:172] (0xc000bf2160) (0xc000631ea0) Stream removed, broadcasting: 5\n"
Feb 15 00:23:42.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:23:42.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
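The three exec commands above all use the same trick: the webserver's readiness probe serves `/usr/local/apache2/htdocs/index.html`, so moving that file away makes the pod unready, and `|| true` keeps the exec's exit code 0 even when the file is already gone (as in the earlier "can't rename" stderr). A minimal local simulation of that idiom (paths under `$tmpdir` are illustrative, not the real container filesystem):

```shell
# Simulate breaking a file-based readiness probe; `|| true` swallows a missing-file failure.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/htdocs"
echo ok > "$tmpdir/htdocs/index.html"

# First run: file exists, mv succeeds and prints the rename.
mv -v "$tmpdir/htdocs/index.html" "$tmpdir/index.html" || true

# Second run: file is already gone, mv fails on stderr, but `|| true` keeps rc 0,
# which is why the e2e harness logs rc 0 either way.
mv -v "$tmpdir/htdocs/index.html" "$tmpdir/index.html" || true
echo "exit status: $?"
```

The same shape is used in reverse (`mv /tmp/index.html .../htdocs/`) to restore readiness.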

Feb 15 00:23:42.506: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:23:42.527: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 15 00:23:52.548: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:23:52.548: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:23:52.548: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:23:52.570: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 15 00:23:52.570: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:08 +0000 UTC  }]
Feb 15 00:23:52.570: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:23:52.570: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:23:52.570: INFO: 
Feb 15 00:23:52.570: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 15 00:23:54.557: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 15 00:23:54.557: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:08 +0000 UTC  }]
Feb 15 00:23:54.557: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:23:54.558: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:23:54.558: INFO: 
Feb 15 00:23:54.558: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 15 00:23:59.614: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 15 00:23:59.614: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:08 +0000 UTC  }]
Feb 15 00:23:59.615: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:23:59.615: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:23:59.615: INFO: 
Feb 15 00:23:59.615: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 15 00:24:00.621: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 15 00:24:00.622: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:08 +0000 UTC  }]
Feb 15 00:24:00.622: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:24:00.622: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:24:00.622: INFO: 
Feb 15 00:24:00.622: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 15 00:24:01.630: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 15 00:24:01.630: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:08 +0000 UTC  }]
Feb 15 00:24:01.630: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:24:01.630: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 00:23:29 +0000 UTC  }]
Feb 15 00:24:01.630: INFO: 
Feb 15 00:24:01.630: INFO: StatefulSet ss has not reached scale 0, at 3
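The repeated status dumps above are a poll loop: the suite re-reads the StatefulSet until `status.replicas` reaches 0, logging "has not reached scale 0" on each pass. A runnable sketch of that wait pattern, where `replicas()` is a local stand-in for something like `kubectl get sts ss -o jsonpath='{.status.replicas}'` (the decrement simulates the controller's progress; the real loop just sleeps and re-reads):

```shell
# Poll until the observed replica count reaches the target scale of 0.
current=3
replicas() { echo "$current"; }   # stand-in for reading status.replicas from the API

while [ "$(replicas)" -ne 0 ]; do
  echo "StatefulSet ss has not reached scale 0, at $(replicas)"
  current=$((current - 1))        # simulated progress toward the scale-down
done
echo "reached scale 0"
```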
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-9109
Feb 15 00:24:02.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:24:02.825: INFO: rc: 1
Feb 15 00:24:02.826: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
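The failures that follow are expected: RunHostCmd retries the exec every 10s, first hitting "container not found" while the pod terminates, then "pods not found" once ss-1 is deleted, until the retry budget expires. A sketch of that retry loop, with `run_host_cmd` as a hypothetical stand-in for `kubectl exec --namespace=<ns> <pod> -- /bin/sh -c <cmd>` (here it always fails, as when the pod no longer exists, and the loop caps at 3 attempts instead of waiting 10s each time):

```shell
# Retry a host command until it succeeds or the attempt budget runs out.
run_host_cmd() {
  # stand-in for: kubectl exec --namespace="$1" "$2" -- /bin/sh -c "$3"
  false   # simulates "pods not found": the exec keeps failing
}

attempts=0
until run_host_cmd statefulset-9109 ss-1 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then break; fi   # real suite: sleep 10 and retry until timeout
  sleep 0.1
done
echo "attempts: $attempts"
```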
Feb 15 00:24:12.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:24:13.005: INFO: rc: 1
Feb 15 00:24:13.005: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:26:14.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:26:14.974: INFO: rc: 1
Feb 15 00:26:14.975: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:26:24.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:26:25.152: INFO: rc: 1
Feb 15 00:26:25.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:26:35.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:26:35.307: INFO: rc: 1
Feb 15 00:26:35.308: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:26:45.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:26:45.420: INFO: rc: 1
Feb 15 00:26:45.420: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:26:55.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:26:55.593: INFO: rc: 1
Feb 15 00:26:55.593: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:27:05.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:27:05.741: INFO: rc: 1
Feb 15 00:27:05.742: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:27:15.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:27:15.940: INFO: rc: 1
Feb 15 00:27:15.940: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:27:25.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:27:26.109: INFO: rc: 1
Feb 15 00:27:26.109: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:27:36.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:27:36.240: INFO: rc: 1
Feb 15 00:27:36.240: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:27:46.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:27:46.419: INFO: rc: 1
Feb 15 00:27:46.420: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:27:56.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:27:56.934: INFO: rc: 1
Feb 15 00:27:56.935: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:28:06.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:28:07.109: INFO: rc: 1
Feb 15 00:28:07.109: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:28:17.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:28:17.300: INFO: rc: 1
Feb 15 00:28:17.300: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:28:27.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:28:27.438: INFO: rc: 1
Feb 15 00:28:27.439: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:28:37.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:28:37.554: INFO: rc: 1
Feb 15 00:28:37.554: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:28:47.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:28:47.980: INFO: rc: 1
Feb 15 00:28:47.980: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:28:57.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:28:58.154: INFO: rc: 1
Feb 15 00:28:58.155: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 15 00:29:08.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:29:08.296: INFO: rc: 1
Feb 15 00:29:08.296: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
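The block above is the e2e framework's RunHostCmd retry behavior: run a command, and on a non-zero exit wait a fixed interval and try again until the budget is exhausted. A minimal sketch of that pattern, with a hypothetical `retry_cmd` helper that takes any command instead of the real framework's `kubectl exec`:

```shell
# Hypothetical sketch of the RunHostCmd retry loop seen in the log:
# run a command; on failure, sleep and retry until max_attempts is reached.
retry_cmd() {
  local interval="$1" max_attempts="$2"
  shift 2
  local attempt=1
  while true; do
    if "$@"; then
      return 0                        # command succeeded
    fi
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1                        # give up, like the framework at 00:29:08
    fi
    attempt=$((attempt + 1))
    sleep "$interval"                 # the 10s pause between attempts in the log
  done
}
```

In the real test the retried command is the `kubectl exec ... mv -v ...` shown above; here any command can be substituted, e.g. `retry_cmd 10 24 kubectl exec ...`.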
Feb 15 00:29:08.296: INFO: Scaling statefulset ss to 0
Feb 15 00:29:08.314: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 15 00:29:08.317: INFO: Deleting all statefulset in ns statefulset-9109
Feb 15 00:29:08.320: INFO: Scaling statefulset ss to 0
Feb 15 00:29:08.332: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:29:08.335: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:29:08.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9109" for this suite.

• [SLOW TEST:360.077 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":89,"skipped":1422,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:29:08.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6027.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6027.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local; sleep 1; done
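The probe pods above repeatedly run `dig +short ... CNAME` and write the answer into a per-name result file, which the test then compares against the expected target. A simplified, assumed sketch of that write-then-compare logic (the `probe_cname`/`expect_cname` helpers are hypothetical; the real lookup is the `dig` command shown above):

```shell
# Simplified sketch of the DNS probe pattern: capture a resolver command's
# output into a result file, then assert the file holds the expected record.
probe_cname() {
  local lookup_cmd="$1" result_file="$2"
  $lookup_cmd > "$result_file"    # stand-in for: dig +short <name> CNAME
}

expect_cname() {
  local result_file="$1" expected="$2"
  [ "$(cat "$result_file")" = "$expected" ]
}
```

This is why the failures below report the file "contains 'foo.example.com.' instead of 'bar.example.com.'": the result file still holds the old CNAME target until the DNS change propagates.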

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 00:29:20.698: INFO: DNS probes using dns-test-cd4c3ab1-2176-4a4b-a5dc-0acc0cfd06c2 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6027.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6027.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 00:29:32.900: INFO: File wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 15 00:29:32.907: INFO: File jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 15 00:29:32.907: INFO: Lookups using dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e failed for: [wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local]

Feb 15 00:29:37.919: INFO: File wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 15 00:29:37.927: INFO: File jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 15 00:29:37.927: INFO: Lookups using dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e failed for: [wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local]

Feb 15 00:29:42.918: INFO: File wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 15 00:29:42.924: INFO: File jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 15 00:29:42.924: INFO: Lookups using dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e failed for: [wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local]

Feb 15 00:29:47.935: INFO: File jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local from pod  dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e contains '' instead of 'bar.example.com.'
Feb 15 00:29:47.935: INFO: Lookups using dns-6027/dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e failed for: [jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local]

Feb 15 00:29:52.933: INFO: DNS probes using dns-test-b7bffdf4-bffa-4e83-a3ef-e577bf02647e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6027.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6027.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6027.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6027.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 00:30:11.245: INFO: DNS probes using dns-test-97b72222-080e-4f57-958d-5bde907b8be7 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:30:11.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6027" for this suite.

• [SLOW TEST:63.075 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":90,"skipped":1443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:30:11.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:30:11.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986" in namespace "projected-91" to be "success or failure"
Feb 15 00:30:11.645: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986": Phase="Pending", Reason="", readiness=false. Elapsed: 16.820694ms
Feb 15 00:30:13.658: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029294508s
Feb 15 00:30:15.665: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036418339s
Feb 15 00:30:17.691: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062917598s
Feb 15 00:30:19.700: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071403415s
Feb 15 00:30:21.712: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083612912s
STEP: Saw pod success
Feb 15 00:30:21.712: INFO: Pod "downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986" satisfied condition "success or failure"
Feb 15 00:30:21.719: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986 container client-container: 
STEP: delete the pod
Feb 15 00:30:21.942: INFO: Waiting for pod downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986 to disappear
Feb 15 00:30:21.948: INFO: Pod downwardapi-volume-49e98c72-f459-41c2-8150-9d5c1997d986 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:30:21.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-91" for this suite.

• [SLOW TEST:10.532 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":91,"skipped":1499,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:30:21.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4531.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4531.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4531.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4531.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4531.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.87.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.87.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.87.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.87.108_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4531.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4531.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4531.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4531.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4531.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4531.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.87.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.87.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.87.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.87.108_tcp@PTR;sleep 1; done
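The probe commands above include a reverse (PTR) lookup: the service ClusterIP 10.96.87.108 is checked under the name 108.87.96.10.in-addr.arpa., i.e. the IPv4 octets reversed with the in-addr.arpa. suffix appended. A small sketch of that conversion (the `ptr_name` helper is hypothetical; the probes above inline the reversed name directly):

```shell
# Sketch of the IPv4 -> PTR-name conversion used by the probe's reverse lookup:
# reverse the dotted octets and append the in-addr.arpa. suffix.
ptr_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}
```

For example, `ptr_name 10.96.87.108` yields `108.87.96.10.in-addr.arpa.`, matching the name queried in the probe script.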

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 00:30:32.293: INFO: Unable to read wheezy_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.298: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.305: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.309: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.331: INFO: Unable to read jessie_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.334: INFO: Unable to read jessie_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.337: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.339: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:32.355: INFO: Lookups using dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c failed for: [wheezy_udp@dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_udp@dns-test-service.dns-4531.svc.cluster.local jessie_tcp@dns-test-service.dns-4531.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local]

[... the same eight wheezy/jessie lookups failed again at Feb 15 00:30:37 with 'the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)' ...]

Feb 15 00:30:42.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.372: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.376: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.379: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.406: INFO: Unable to read jessie_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.409: INFO: Unable to read jessie_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.413: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.416: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:42.437: INFO: Lookups using dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c failed for: [wheezy_udp@dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_udp@dns-test-service.dns-4531.svc.cluster.local jessie_tcp@dns-test-service.dns-4531.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local]

Feb 15 00:30:47.381: INFO: Unable to read wheezy_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.503: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.511: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.556: INFO: Unable to read jessie_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.560: INFO: Unable to read jessie_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.564: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:47.633: INFO: Lookups using dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c failed for: [wheezy_udp@dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_udp@dns-test-service.dns-4531.svc.cluster.local jessie_tcp@dns-test-service.dns-4531.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local]

Feb 15 00:30:52.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.380: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.384: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.442: INFO: Unable to read jessie_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.449: INFO: Unable to read jessie_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.455: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.461: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:52.480: INFO: Lookups using dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c failed for: [wheezy_udp@dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_udp@dns-test-service.dns-4531.svc.cluster.local jessie_tcp@dns-test-service.dns-4531.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local]

Feb 15 00:30:57.366: INFO: Unable to read wheezy_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.384: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.434: INFO: Unable to read jessie_udp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.441: INFO: Unable to read jessie_tcp@dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.447: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.452: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local from pod dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c: the server could not find the requested resource (get pods dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c)
Feb 15 00:30:57.480: INFO: Lookups using dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c failed for: [wheezy_udp@dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@dns-test-service.dns-4531.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_udp@dns-test-service.dns-4531.svc.cluster.local jessie_tcp@dns-test-service.dns-4531.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4531.svc.cluster.local]

Feb 15 00:31:02.413: INFO: DNS probes using dns-4531/dns-test-9f26d279-cfa4-47a9-83f5-c639c0174b6c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:31:02.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4531" for this suite.

• [SLOW TEST:40.742 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":92,"skipped":1519,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:31:02.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:31:02.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 15 00:31:03.149: INFO: stderr: ""
Feb 15 00:31:03.149: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:31:03.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2672" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":280,"completed":93,"skipped":1528,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:31:03.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 15 00:31:03.410: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 15 00:31:03.448: INFO: Waiting for terminating namespaces to be deleted...
Feb 15 00:31:03.452: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 15 00:31:03.463: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.463: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 00:31:03.463: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 15 00:31:03.463: INFO: 	Container weave ready: true, restart count 1
Feb 15 00:31:03.463: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 00:31:03.463: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 15 00:31:03.486: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container kube-scheduler ready: true, restart count 11
Feb 15 00:31:03.487: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 15 00:31:03.487: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container etcd ready: true, restart count 1
Feb 15 00:31:03.487: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container coredns ready: true, restart count 0
Feb 15 00:31:03.487: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container coredns ready: true, restart count 0
Feb 15 00:31:03.487: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container weave ready: true, restart count 0
Feb 15 00:31:03.487: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 00:31:03.487: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 15 00:31:03.487: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 15 00:31:03.487: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e462fbe4-a5c5-40b1-9819-6159c948b95b 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-e462fbe4-a5c5-40b1-9819-6159c948b95b off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e462fbe4-a5c5-40b1-9819-6159c948b95b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:31:37.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7154" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:34.736 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":94,"skipped":1557,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:31:37.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-075c5b51-af81-4891-bf5e-08a813eb7b91
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:31:38.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3224" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":95,"skipped":1558,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:31:38.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:31:50.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3597" for this suite.

• [SLOW TEST:12.154 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1558,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:31:50.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-66667c0f-96f8-422d-b9bb-e121c01f2e27
STEP: Creating a pod to test consume configMaps
Feb 15 00:31:50.872: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66" in namespace "projected-2898" to be "success or failure"
Feb 15 00:31:50.957: INFO: Pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66": Phase="Pending", Reason="", readiness=false. Elapsed: 84.511146ms
Feb 15 00:31:52.964: INFO: Pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091312801s
Feb 15 00:31:54.971: INFO: Pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098448811s
Feb 15 00:31:56.979: INFO: Pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106438122s
Feb 15 00:31:58.986: INFO: Pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11317276s
STEP: Saw pod success
Feb 15 00:31:58.986: INFO: Pod "pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66" satisfied condition "success or failure"
Feb 15 00:31:58.989: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 00:31:59.065: INFO: Waiting for pod pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66 to disappear
Feb 15 00:31:59.180: INFO: Pod pod-projected-configmaps-ba8164c4-fbab-436f-bd6c-d4651c360c66 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:31:59.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2898" for this suite.

• [SLOW TEST:8.927 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":97,"skipped":1565,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:31:59.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-cbdadef2-0e18-4a84-bc6a-670e91505d46
STEP: Creating a pod to test consume configMaps
Feb 15 00:31:59.407: INFO: Waiting up to 5m0s for pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7" in namespace "configmap-9827" to be "success or failure"
Feb 15 00:31:59.425: INFO: Pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.244222ms
Feb 15 00:32:01.450: INFO: Pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043246336s
Feb 15 00:32:03.460: INFO: Pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053045278s
Feb 15 00:32:05.468: INFO: Pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061387675s
Feb 15 00:32:07.473: INFO: Pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066359767s
STEP: Saw pod success
Feb 15 00:32:07.474: INFO: Pod "pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7" satisfied condition "success or failure"
Feb 15 00:32:07.476: INFO: Trying to get logs from node jerma-node pod pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7 container configmap-volume-test: 
STEP: delete the pod
Feb 15 00:32:07.559: INFO: Waiting for pod pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7 to disappear
Feb 15 00:32:07.641: INFO: Pod pod-configmaps-96424e8d-6aa1-4da8-a74b-9034c7c52da7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:32:07.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9827" for this suite.

• [SLOW TEST:8.488 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":98,"skipped":1566,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:32:07.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:32:20.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5776" for this suite.

• [SLOW TEST:13.302 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":99,"skipped":1568,"failed":0}
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:32:20.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:32:33.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5589" for this suite.

• [SLOW TEST:12.165 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":100,"skipped":1568,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:32:33.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 15 00:32:33.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 15 00:32:44.301: INFO: >>> kubeConfig: /root/.kube/config
Feb 15 00:32:47.125: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:32:59.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6589" for this suite.

• [SLOW TEST:26.189 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":101,"skipped":1575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:32:59.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:32:59.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928" in namespace "projected-4285" to be "success or failure"
Feb 15 00:32:59.459: INFO: Pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928": Phase="Pending", Reason="", readiness=false. Elapsed: 18.664758ms
Feb 15 00:33:01.467: INFO: Pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025808419s
Feb 15 00:33:03.479: INFO: Pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038036558s
Feb 15 00:33:05.493: INFO: Pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052542008s
Feb 15 00:33:07.498: INFO: Pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05754714s
STEP: Saw pod success
Feb 15 00:33:07.498: INFO: Pod "downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928" satisfied condition "success or failure"
Feb 15 00:33:07.501: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928 container client-container: 
STEP: delete the pod
Feb 15 00:33:07.634: INFO: Waiting for pod downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928 to disappear
Feb 15 00:33:07.637: INFO: Pod downwardapi-volume-49bdfa96-6c85-4d1c-9021-9db9919f5928 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:33:07.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4285" for this suite.

• [SLOW TEST:8.307 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":102,"skipped":1630,"failed":0}
SS
------------------------------
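The repeated `Phase="Pending" ... Elapsed: ...` lines in the spec above come from a poll-until-timeout loop: the framework re-checks the pod roughly every two seconds until it reaches `Succeeded` or the 5m0s budget runs out. A minimal Python sketch of that pattern (an illustration only, not the actual e2e framework code; the `poll` cadence and injectable clock are assumptions for testability):

```python
import time

def wait_for_condition(check, timeout=300.0, poll=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns truthy or `timeout` seconds elapse.

    Returns the elapsed time on success, mirroring the log's cadence of
    one status line per poll interval; raises TimeoutError otherwise.
    """
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(poll)
```

With a pod that turns `Succeeded` on the fourth check, this produces the same shape as the log: three "Pending" polls, then success around the 6–8 second mark.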
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:33:07.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:33:07.814: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99" in namespace "security-context-test-3309" to be "success or failure"
Feb 15 00:33:07.837: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": Phase="Pending", Reason="", readiness=false. Elapsed: 22.731067ms
Feb 15 00:33:09.848: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033872847s
Feb 15 00:33:11.860: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045787164s
Feb 15 00:33:13.871: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056818278s
Feb 15 00:33:15.894: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080018119s
Feb 15 00:33:17.903: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089016676s
Feb 15 00:33:17.903: INFO: Pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99" satisfied condition "success or failure"
Feb 15 00:33:17.922: INFO: Got logs for pod "busybox-privileged-false-98a4b7c9-d8ba-4ea7-bc14-c42f76d9be99": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:33:17.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3309" for this suite.

• [SLOW TEST:10.290 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1632,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:33:17.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 15 00:33:18.060: INFO: Waiting up to 5m0s for pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a" in namespace "containers-923" to be "success or failure"
Feb 15 00:33:18.070: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.534359ms
Feb 15 00:33:20.256: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196219784s
Feb 15 00:33:22.262: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202529746s
Feb 15 00:33:24.270: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209942429s
Feb 15 00:33:26.283: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223540699s
Feb 15 00:33:28.292: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232228027s
STEP: Saw pod success
Feb 15 00:33:28.293: INFO: Pod "client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a" satisfied condition "success or failure"
Feb 15 00:33:28.298: INFO: Trying to get logs from node jerma-node pod client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a container test-container: 
STEP: delete the pod
Feb 15 00:33:28.355: INFO: Waiting for pod client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a to disappear
Feb 15 00:33:28.364: INFO: Pod client-containers-dc99f0e2-01d5-4836-a3d3-4f24af4b9f7a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:33:28.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-923" for this suite.

• [SLOW TEST:10.430 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":104,"skipped":1655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:33:28.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-874dda6f-c4c5-49ed-b2c5-1a9f9f010336
STEP: Creating a pod to test consume secrets
Feb 15 00:33:28.523: INFO: Waiting up to 5m0s for pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248" in namespace "secrets-7568" to be "success or failure"
Feb 15 00:33:28.539: INFO: Pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248": Phase="Pending", Reason="", readiness=false. Elapsed: 16.139035ms
Feb 15 00:33:30.582: INFO: Pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059094949s
Feb 15 00:33:32.596: INFO: Pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072536263s
Feb 15 00:33:34.603: INFO: Pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080280652s
Feb 15 00:33:36.613: INFO: Pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089942391s
STEP: Saw pod success
Feb 15 00:33:36.613: INFO: Pod "pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248" satisfied condition "success or failure"
Feb 15 00:33:36.618: INFO: Trying to get logs from node jerma-node pod pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248 container secret-volume-test: 
STEP: delete the pod
Feb 15 00:33:36.676: INFO: Waiting for pod pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248 to disappear
Feb 15 00:33:36.685: INFO: Pod pod-secrets-6c313fc4-8993-4ddd-9f00-143c97826248 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:33:36.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7568" for this suite.

• [SLOW TEST:8.325 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1689,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:33:36.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 15 00:33:37.061: INFO: Waiting up to 5m0s for pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24" in namespace "emptydir-3024" to be "success or failure"
Feb 15 00:33:37.427: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24": Phase="Pending", Reason="", readiness=false. Elapsed: 366.345442ms
Feb 15 00:33:39.435: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374209403s
Feb 15 00:33:41.463: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402000535s
Feb 15 00:33:43.472: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411207413s
Feb 15 00:33:45.478: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417373064s
Feb 15 00:33:47.486: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.424830137s
STEP: Saw pod success
Feb 15 00:33:47.486: INFO: Pod "pod-578e150e-a4dd-433e-ad87-62135bc69a24" satisfied condition "success or failure"
Feb 15 00:33:47.492: INFO: Trying to get logs from node jerma-node pod pod-578e150e-a4dd-433e-ad87-62135bc69a24 container test-container: 
STEP: delete the pod
Feb 15 00:33:47.968: INFO: Waiting for pod pod-578e150e-a4dd-433e-ad87-62135bc69a24 to disappear
Feb 15 00:33:47.983: INFO: Pod pod-578e150e-a4dd-433e-ad87-62135bc69a24 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:33:47.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3024" for this suite.

• [SLOW TEST:11.302 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":106,"skipped":1690,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
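The `Elapsed` values and wait budgets throughout this log use Go's duration format (`366.345442ms`, `10.424830137s`, `3m0s`, `5m0s`). A small Python helper for converting those strings to seconds when post-processing a log like this; only the unit suffixes that actually appear here are handled:

```python
import re

# Multipliers for the Go duration suffixes seen in this log.
_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_to_seconds(text):
    """Convert a Go-style duration string, e.g. '2.374209403s',
    '366.345442ms', or '3m0s', into a float number of seconds."""
    if not re.fullmatch(r"(?:\d+(?:\.\d+)?(?:ms|s|m|h))+", text):
        raise ValueError(f"unrecognized duration: {text!r}")
    return sum(
        float(number) * _UNITS[unit]
        for number, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", text)
    )
```

So the `3m0s` node-ready budget is 180 seconds and the `5m0s` pod budget is 300.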
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:33:48.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Feb 15 00:33:48.116: INFO: Waiting up to 5m0s for pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94" in namespace "var-expansion-8703" to be "success or failure"
Feb 15 00:33:48.125: INFO: Pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94": Phase="Pending", Reason="", readiness=false. Elapsed: 9.098057ms
Feb 15 00:33:50.136: INFO: Pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019206976s
Feb 15 00:33:52.147: INFO: Pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031086485s
Feb 15 00:33:54.155: INFO: Pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038763634s
Feb 15 00:33:56.163: INFO: Pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046534701s
STEP: Saw pod success
Feb 15 00:33:56.163: INFO: Pod "var-expansion-250a71a9-5794-4321-b647-fd9893eefe94" satisfied condition "success or failure"
Feb 15 00:33:56.167: INFO: Trying to get logs from node jerma-node pod var-expansion-250a71a9-5794-4321-b647-fd9893eefe94 container dapi-container: 
STEP: delete the pod
Feb 15 00:33:56.208: INFO: Waiting for pod var-expansion-250a71a9-5794-4321-b647-fd9893eefe94 to disappear
Feb 15 00:33:56.211: INFO: Pod var-expansion-250a71a9-5794-4321-b647-fd9893eefe94 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:33:56.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8703" for this suite.

• [SLOW TEST:8.214 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":107,"skipped":1731,"failed":0}
SSSSSSSSSSS
------------------------------
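The Variable Expansion spec above ("should allow composing env vars into new env vars") exercises Kubernetes' `$(VAR)` dependent-variable expansion: a value may reference env vars declared earlier in the same container, unresolvable references are left verbatim, and `$$` escapes a literal `$`. A rough Python model of that documented rule (an illustration, not the kubelet's implementation):

```python
import re

def compose_env(env_list):
    """Expand $(NAME) references in container env values, in declaration order.

    A reference can only see variables declared *earlier* in the list;
    unresolvable references are left as-is, and '$$' escapes to a literal '$'.
    """
    resolved = {}
    pattern = re.compile(r"\$\$|\$\(([A-Za-z_][A-Za-z0-9_]*)\)")
    for name, value in env_list:
        def repl(match):
            if match.group(0) == "$$":
                return "$"
            # Leave unknown references untouched, as Kubernetes does.
            return resolved.get(match.group(1), match.group(0))
        resolved[name] = pattern.sub(repl, value)
    return resolved
```

With `FIRST=foo` and `SECOND=$(FIRST)-bar`, the composed value of `SECOND` is `foo-bar`, which is the behavior this conformance test asserts inside the pod.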
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:33:56.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:33:56.318: INFO: Waiting up to 5m0s for pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0" in namespace "security-context-test-4074" to be "success or failure"
Feb 15 00:33:56.356: INFO: Pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0": Phase="Pending", Reason="", readiness=false. Elapsed: 37.29796ms
Feb 15 00:33:58.366: INFO: Pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047061191s
Feb 15 00:34:00.371: INFO: Pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052589112s
Feb 15 00:34:02.378: INFO: Pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058926559s
Feb 15 00:34:04.420: INFO: Pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101634513s
Feb 15 00:34:04.420: INFO: Pod "busybox-user-65534-93b10125-4272-4e6d-b371-48f0fbf862a0" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:34:04.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4074" for this suite.

• [SLOW TEST:8.210 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1742,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:34:04.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-65d19936-a2e4-4ef1-ae92-f3a8c9a6b53d
STEP: Creating a pod to test consume secrets
Feb 15 00:34:04.834: INFO: Waiting up to 5m0s for pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd" in namespace "secrets-6667" to be "success or failure"
Feb 15 00:34:04.915: INFO: Pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd": Phase="Pending", Reason="", readiness=false. Elapsed: 80.84757ms
Feb 15 00:34:06.941: INFO: Pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106826478s
Feb 15 00:34:08.952: INFO: Pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118207668s
Feb 15 00:34:10.962: INFO: Pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127883403s
Feb 15 00:34:12.968: INFO: Pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134184196s
STEP: Saw pod success
Feb 15 00:34:12.968: INFO: Pod "pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd" satisfied condition "success or failure"
Feb 15 00:34:12.976: INFO: Trying to get logs from node jerma-node pod pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd container secret-volume-test: 
STEP: delete the pod
Feb 15 00:34:13.033: INFO: Waiting for pod pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd to disappear
Feb 15 00:34:13.045: INFO: Pod pod-secrets-489307f8-b599-4072-9ed7-0d863cdf41fd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:34:13.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6667" for this suite.
STEP: Destroying namespace "secret-namespace-4761" for this suite.

• [SLOW TEST:8.643 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":109,"skipped":1759,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:34:13.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 15 00:34:13.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1810'
Feb 15 00:34:16.095: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 00:34:16.095: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Feb 15 00:34:18.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1810'
Feb 15 00:34:18.324: INFO: stderr: ""
Feb 15 00:34:18.324: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:34:18.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1810" for this suite.

• [SLOW TEST:5.320 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1592
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":110,"skipped":1824,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:34:18.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-d1a8e494-427a-41cc-a646-47e0c102ceb8
STEP: Creating configMap with name cm-test-opt-upd-bf5e9d8f-effd-44f9-bbf7-3915d9ca6d73
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d1a8e494-427a-41cc-a646-47e0c102ceb8
STEP: Updating configmap cm-test-opt-upd-bf5e9d8f-effd-44f9-bbf7-3915d9ca6d73
STEP: Creating configMap with name cm-test-opt-create-ff795be2-3dff-4094-899e-cc391ec05a1a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:35:40.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6935" for this suite.

• [SLOW TEST:81.629 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1880,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:35:40.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-071b8b67-b005-4c00-8f4b-e63ee7c74858
STEP: Creating a pod to test consume configMaps
Feb 15 00:35:40.180: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83" in namespace "configmap-1778" to be "success or failure"
Feb 15 00:35:40.190: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 9.595825ms
Feb 15 00:35:42.194: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0144716s
Feb 15 00:35:44.203: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023249487s
Feb 15 00:35:46.209: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028536184s
Feb 15 00:35:48.217: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037411639s
Feb 15 00:35:50.224: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044355355s
STEP: Saw pod success
Feb 15 00:35:50.224: INFO: Pod "pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83" satisfied condition "success or failure"
Feb 15 00:35:50.229: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83 container configmap-volume-test: 
STEP: delete the pod
Feb 15 00:35:50.640: INFO: Waiting for pod pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83 to disappear
Feb 15 00:35:50.652: INFO: Pod pod-configmaps-4a80d903-dc81-4552-89e9-118b17da4f83 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:35:50.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1778" for this suite.

• [SLOW TEST:10.641 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1886,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:35:50.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 15 00:35:50.888: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2735 /api/v1/namespaces/watch-2735/configmaps/e2e-watch-test-resource-version 8570bd17-0d2e-4c7d-9bd0-17fc88e62d7d 8484745 0 2020-02-15 00:35:50 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 15 00:35:50.889: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2735 /api/v1/namespaces/watch-2735/configmaps/e2e-watch-test-resource-version 8570bd17-0d2e-4c7d-9bd0-17fc88e62d7d 8484746 0 2020-02-15 00:35:50 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:35:50.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2735" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":113,"skipped":1904,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:35:50.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2267.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2267.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2267.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2267.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2267.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2267.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 00:36:03.440: INFO: DNS probes using dns-2267/dns-test-cf8b9ff6-b75b-4148-b385-8b2257de79b9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:36:03.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2267" for this suite.

• [SLOW TEST:12.871 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":114,"skipped":1917,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:36:03.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:36:04.215: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce" in namespace "downward-api-6491" to be "success or failure"
Feb 15 00:36:04.224: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.428704ms
Feb 15 00:36:06.239: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023759362s
Feb 15 00:36:08.249: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033203852s
Feb 15 00:36:10.256: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040483356s
Feb 15 00:36:12.265: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049707764s
Feb 15 00:36:14.271: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055304521s
STEP: Saw pod success
Feb 15 00:36:14.271: INFO: Pod "downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce" satisfied condition "success or failure"
Feb 15 00:36:14.274: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce container client-container: 
STEP: delete the pod
Feb 15 00:36:14.307: INFO: Waiting for pod downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce to disappear
Feb 15 00:36:14.348: INFO: Pod downwardapi-volume-82d93fac-fa5a-446c-a433-9d9ddc1084ce no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:36:14.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6491" for this suite.

• [SLOW TEST:10.583 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:36:14.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:36:22.701: INFO: Waiting up to 5m0s for pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5" in namespace "pods-1886" to be "success or failure"
Feb 15 00:36:22.734: INFO: Pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.598494ms
Feb 15 00:36:24.741: INFO: Pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039319191s
Feb 15 00:36:26.748: INFO: Pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046015158s
Feb 15 00:36:28.830: INFO: Pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128515846s
Feb 15 00:36:30.839: INFO: Pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137194975s
STEP: Saw pod success
Feb 15 00:36:30.839: INFO: Pod "client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5" satisfied condition "success or failure"
Feb 15 00:36:30.843: INFO: Trying to get logs from node jerma-node pod client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5 container env3cont: 
STEP: delete the pod
Feb 15 00:36:30.913: INFO: Waiting for pod client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5 to disappear
Feb 15 00:36:30.944: INFO: Pod client-envvars-4ec0d225-1253-459a-92ad-7d51bc96e2f5 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:36:30.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1886" for this suite.

• [SLOW TEST:16.536 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":116,"skipped":1972,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:36:30.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:36:31.031: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:36:33.037: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:36:35.038: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:36:37.035: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Pending, waiting for it to be Running (with Ready = true)
Feb 15 00:36:39.037: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:41.041: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:43.041: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:45.037: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:47.078: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:49.035: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:51.037: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:53.038: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:55.044: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = false)
Feb 15 00:36:57.040: INFO: The status of Pod test-webserver-9cd758c6-856f-47fc-973e-18ff2e70c62f is Running (Ready = true)
Feb 15 00:36:57.045: INFO: Container started at 2020-02-15 00:36:36 +0000 UTC, pod became ready at 2020-02-15 00:36:55 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:36:57.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2057" for this suite.

• [SLOW TEST:26.136 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":117,"skipped":1986,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:36:57.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:36:57.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:36:59.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:37:01.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:37:03.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323817, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:37:06.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:37:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3927" for this suite.
STEP: Destroying namespace "webhook-3927-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.151 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":118,"skipped":1994,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:37:07.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Feb 15 00:37:07.481: INFO: Waiting up to 5m0s for pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b" in namespace "var-expansion-165" to be "success or failure"
Feb 15 00:37:07.589: INFO: Pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b": Phase="Pending", Reason="", readiness=false. Elapsed: 108.596388ms
Feb 15 00:37:09.597: INFO: Pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116239899s
Feb 15 00:37:11.604: INFO: Pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123647748s
Feb 15 00:37:13.622: INFO: Pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141056574s
Feb 15 00:37:15.629: INFO: Pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.148470482s
STEP: Saw pod success
Feb 15 00:37:15.629: INFO: Pod "var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b" satisfied condition "success or failure"
Feb 15 00:37:15.634: INFO: Trying to get logs from node jerma-node pod var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b container dapi-container: 
STEP: delete the pod
Feb 15 00:37:15.759: INFO: Waiting for pod var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b to disappear
Feb 15 00:37:15.789: INFO: Pod var-expansion-dabb2b68-2f78-47de-9e25-b6de2094c38b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:37:15.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-165" for this suite.

• [SLOW TEST:8.585 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":2000,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:37:15.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Feb 15 00:37:15.973: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5445" to be "success or failure"
Feb 15 00:37:16.094: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 120.50259ms
Feb 15 00:37:18.110: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136223931s
Feb 15 00:37:20.117: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143182876s
Feb 15 00:37:22.429: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455566228s
Feb 15 00:37:24.435: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.461573029s
Feb 15 00:37:26.446: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.472536238s
Feb 15 00:37:28.455: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.481870777s
STEP: Saw pod success
Feb 15 00:37:28.456: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 15 00:37:28.458: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 15 00:37:28.529: INFO: Waiting for pod pod-host-path-test to disappear
Feb 15 00:37:28.627: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:37:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5445" for this suite.

• [SLOW TEST:12.814 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":120,"skipped":2014,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:37:28.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:37:29.818: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:37:31.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:37:33.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:37:35.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323849, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:37:38.913: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:37:39.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-606" for this suite.
STEP: Destroying namespace "webhook-606-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.758 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":121,"skipped":2024,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:37:39.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 15 00:37:48.101: INFO: Successfully updated pod "pod-update-3f6608c8-0ca4-496a-a84d-94731892d27c"
STEP: verifying the updated pod is in kubernetes
Feb 15 00:37:48.115: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:37:48.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2992" for this suite.

• [SLOW TEST:8.724 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":122,"skipped":2025,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:37:48.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 15 00:37:56.807: INFO: Successfully updated pod "annotationupdatebe129480-57dc-447a-af7f-c80c9fd27641"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:37:58.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1226" for this suite.

• [SLOW TEST:10.759 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":123,"skipped":2029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:37:58.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-7ca7d71e-514f-41e6-86c4-dc958ec3f82a
STEP: Creating a pod to test consume configMaps
Feb 15 00:37:58.975: INFO: Waiting up to 5m0s for pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261" in namespace "configmap-2285" to be "success or failure"
Feb 15 00:37:58.994: INFO: Pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261": Phase="Pending", Reason="", readiness=false. Elapsed: 19.501537ms
Feb 15 00:38:01.001: INFO: Pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026402485s
Feb 15 00:38:03.009: INFO: Pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03427784s
Feb 15 00:38:05.015: INFO: Pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039961574s
Feb 15 00:38:07.027: INFO: Pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051757333s
STEP: Saw pod success
Feb 15 00:38:07.027: INFO: Pod "pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261" satisfied condition "success or failure"
Feb 15 00:38:07.034: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261 container configmap-volume-test: 
STEP: delete the pod
Feb 15 00:38:07.083: INFO: Waiting for pod pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261 to disappear
Feb 15 00:38:07.167: INFO: Pod pod-configmaps-a17c5d79-aaad-431c-af26-67782e0e2261 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:38:07.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2285" for this suite.

• [SLOW TEST:8.292 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":124,"skipped":2066,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:38:07.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 15 00:38:07.356: INFO: Waiting up to 5m0s for pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48" in namespace "downward-api-6638" to be "success or failure"
Feb 15 00:38:07.366: INFO: Pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48": Phase="Pending", Reason="", readiness=false. Elapsed: 9.601326ms
Feb 15 00:38:09.374: INFO: Pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017752163s
Feb 15 00:38:11.380: INFO: Pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023309878s
Feb 15 00:38:13.388: INFO: Pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031113139s
Feb 15 00:38:15.401: INFO: Pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044657445s
STEP: Saw pod success
Feb 15 00:38:15.402: INFO: Pod "downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48" satisfied condition "success or failure"
Feb 15 00:38:15.406: INFO: Trying to get logs from node jerma-node pod downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48 container dapi-container: 
STEP: delete the pod
Feb 15 00:38:15.490: INFO: Waiting for pod downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48 to disappear
Feb 15 00:38:15.527: INFO: Pod downward-api-adad1b3e-4ec7-46fe-b449-f927b54cdc48 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:38:15.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6638" for this suite.

• [SLOW TEST:8.358 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":125,"skipped":2067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:38:15.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-ca01c87d-9e21-4150-8310-f8dfb99cbd42
STEP: Creating a pod to test consume secrets
Feb 15 00:38:15.670: INFO: Waiting up to 5m0s for pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250" in namespace "secrets-9818" to be "success or failure"
Feb 15 00:38:15.683: INFO: Pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250": Phase="Pending", Reason="", readiness=false. Elapsed: 13.009388ms
Feb 15 00:38:17.692: INFO: Pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021751133s
Feb 15 00:38:19.703: INFO: Pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032216798s
Feb 15 00:38:21.716: INFO: Pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045643559s
Feb 15 00:38:23.725: INFO: Pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054499508s
STEP: Saw pod success
Feb 15 00:38:23.725: INFO: Pod "pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250" satisfied condition "success or failure"
Feb 15 00:38:23.729: INFO: Trying to get logs from node jerma-node pod pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250 container secret-volume-test: 
STEP: delete the pod
Feb 15 00:38:23.787: INFO: Waiting for pod pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250 to disappear
Feb 15 00:38:23.800: INFO: Pod pod-secrets-f48208ca-bc4f-412c-b5ea-8f3f75592250 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:38:23.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9818" for this suite.

• [SLOW TEST:8.295 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":126,"skipped":2122,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:38:23.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-e869737d-eb74-495e-b1e1-b0f6fc4fc793
STEP: Creating a pod to test consume secrets
Feb 15 00:38:24.080: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00" in namespace "projected-7238" to be "success or failure"
Feb 15 00:38:24.110: INFO: Pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00": Phase="Pending", Reason="", readiness=false. Elapsed: 29.771274ms
Feb 15 00:38:26.127: INFO: Pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046887023s
Feb 15 00:38:28.139: INFO: Pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058383629s
Feb 15 00:38:30.147: INFO: Pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066554023s
Feb 15 00:38:32.159: INFO: Pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078476391s
STEP: Saw pod success
Feb 15 00:38:32.159: INFO: Pod "pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00" satisfied condition "success or failure"
Feb 15 00:38:32.164: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00 container projected-secret-volume-test: 
STEP: delete the pod
Feb 15 00:38:32.238: INFO: Waiting for pod pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00 to disappear
Feb 15 00:38:32.251: INFO: Pod pod-projected-secrets-e660a651-bea9-4d04-909c-8b9e5f30fb00 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:38:32.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7238" for this suite.

• [SLOW TEST:8.450 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":2140,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:38:32.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 15 00:38:32.358: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:38:46.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8605" for this suite.

• [SLOW TEST:14.216 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":2154,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:38:46.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:38:47.308: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:38:49.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:38:51.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:38:53.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:38:55.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717323927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:38:58.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:38:58.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9691" for this suite.
STEP: Destroying namespace "webhook-9691-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.432 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":129,"skipped":2154,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:38:58.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:39:59.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1337" for this suite.

• [SLOW TEST:60.086 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":2172,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:39:59.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:39:59.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353" in namespace "downward-api-8181" to be "success or failure"
Feb 15 00:39:59.115: INFO: Pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353": Phase="Pending", Reason="", readiness=false. Elapsed: 12.064686ms
Feb 15 00:40:01.123: INFO: Pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019485793s
Feb 15 00:40:03.131: INFO: Pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027572752s
Feb 15 00:40:05.137: INFO: Pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033297707s
Feb 15 00:40:07.145: INFO: Pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041263044s
STEP: Saw pod success
Feb 15 00:40:07.145: INFO: Pod "downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353" satisfied condition "success or failure"
Feb 15 00:40:07.149: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353 container client-container: 
STEP: delete the pod
Feb 15 00:40:08.566: INFO: Waiting for pod downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353 to disappear
Feb 15 00:40:08.667: INFO: Pod downwardapi-volume-04f96a58-8db7-418f-be9b-ac78deb2e353 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:40:08.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8181" for this suite.

• [SLOW TEST:9.663 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":131,"skipped":2176,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:40:08.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:40:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2925" for this suite.

• [SLOW TEST:5.042 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":132,"skipped":2204,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:40:13.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Feb 15 00:40:13.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3605'
Feb 15 00:40:14.485: INFO: stderr: ""
Feb 15 00:40:14.486: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 15 00:40:14.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3605'
Feb 15 00:40:14.750: INFO: stderr: ""
Feb 15 00:40:14.750: INFO: stdout: "update-demo-nautilus-4ghg6 update-demo-nautilus-lm4x5 "
Feb 15 00:40:14.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ghg6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:14.869: INFO: stderr: ""
Feb 15 00:40:14.869: INFO: stdout: ""
Feb 15 00:40:14.869: INFO: update-demo-nautilus-4ghg6 is created but not running
Feb 15 00:40:19.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3605'
Feb 15 00:40:20.385: INFO: stderr: ""
Feb 15 00:40:20.385: INFO: stdout: "update-demo-nautilus-4ghg6 update-demo-nautilus-lm4x5 "
Feb 15 00:40:20.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ghg6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:21.293: INFO: stderr: ""
Feb 15 00:40:21.294: INFO: stdout: ""
Feb 15 00:40:21.294: INFO: update-demo-nautilus-4ghg6 is created but not running
Feb 15 00:40:26.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3605'
Feb 15 00:40:26.492: INFO: stderr: ""
Feb 15 00:40:26.492: INFO: stdout: "update-demo-nautilus-4ghg6 update-demo-nautilus-lm4x5 "
Feb 15 00:40:26.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ghg6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:26.646: INFO: stderr: ""
Feb 15 00:40:26.647: INFO: stdout: ""
Feb 15 00:40:26.647: INFO: update-demo-nautilus-4ghg6 is created but not running
Feb 15 00:40:31.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3605'
Feb 15 00:40:31.884: INFO: stderr: ""
Feb 15 00:40:31.884: INFO: stdout: "update-demo-nautilus-4ghg6 update-demo-nautilus-lm4x5 "
Feb 15 00:40:31.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ghg6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:31.988: INFO: stderr: ""
Feb 15 00:40:31.988: INFO: stdout: "true"
Feb 15 00:40:31.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ghg6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:32.161: INFO: stderr: ""
Feb 15 00:40:32.161: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 15 00:40:32.162: INFO: validating pod update-demo-nautilus-4ghg6
Feb 15 00:40:32.179: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 15 00:40:32.180: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 15 00:40:32.180: INFO: update-demo-nautilus-4ghg6 is verified up and running
Feb 15 00:40:32.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lm4x5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:32.330: INFO: stderr: ""
Feb 15 00:40:32.330: INFO: stdout: "true"
Feb 15 00:40:32.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lm4x5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:40:32.520: INFO: stderr: ""
Feb 15 00:40:32.520: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 15 00:40:32.520: INFO: validating pod update-demo-nautilus-lm4x5
Feb 15 00:40:32.537: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 15 00:40:32.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 15 00:40:32.537: INFO: update-demo-nautilus-lm4x5 is verified up and running
STEP: rolling-update to new replication controller
Feb 15 00:40:32.542: INFO: scanned /root for discovery docs: 
Feb 15 00:40:32.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3605'
Feb 15 00:41:03.911: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 15 00:41:03.911: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 15 00:41:03.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3605'
Feb 15 00:41:04.110: INFO: stderr: ""
Feb 15 00:41:04.110: INFO: stdout: "update-demo-kitten-89t72 update-demo-kitten-lw8ph "
Feb 15 00:41:04.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-89t72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:41:04.202: INFO: stderr: ""
Feb 15 00:41:04.202: INFO: stdout: "true"
Feb 15 00:41:04.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-89t72 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:41:04.282: INFO: stderr: ""
Feb 15 00:41:04.282: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 15 00:41:04.282: INFO: validating pod update-demo-kitten-89t72
Feb 15 00:41:04.292: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 15 00:41:04.292: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 15 00:41:04.292: INFO: update-demo-kitten-89t72 is verified up and running
Feb 15 00:41:04.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lw8ph -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:41:04.370: INFO: stderr: ""
Feb 15 00:41:04.370: INFO: stdout: "true"
Feb 15 00:41:04.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lw8ph -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3605'
Feb 15 00:41:04.451: INFO: stderr: ""
Feb 15 00:41:04.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 15 00:41:04.451: INFO: validating pod update-demo-kitten-lw8ph
Feb 15 00:41:04.459: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 15 00:41:04.459: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 15 00:41:04.459: INFO: update-demo-kitten-lw8ph is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:41:04.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3605" for this suite.

• [SLOW TEST:50.736 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":280,"completed":133,"skipped":2225,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:41:04.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-d0cfa7fb-682f-4609-8744-cec553893b02 in namespace container-probe-6368
Feb 15 00:41:11.729: INFO: Started pod test-webserver-d0cfa7fb-682f-4609-8744-cec553893b02 in namespace container-probe-6368
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 00:41:11.803: INFO: Initial restart count of pod test-webserver-d0cfa7fb-682f-4609-8744-cec553893b02 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:45:12.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6368" for this suite.

• [SLOW TEST:247.697 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":134,"skipped":2265,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:45:12.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:45:12.816: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:45:14.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:45:16.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:45:18.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:45:20.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324312, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:45:24.192: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:45:24.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5000-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:45:25.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9484" for this suite.
STEP: Destroying namespace "webhook-9484-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.324 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":135,"skipped":2294,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:45:25.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-7b7f3499-9f37-4738-a000-1d5dae73e744
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:45:25.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9428" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":136,"skipped":2324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:45:25.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 15 00:45:25.694: INFO: Waiting up to 5m0s for pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d" in namespace "downward-api-541" to be "success or failure"
Feb 15 00:45:25.782: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 87.728781ms
Feb 15 00:45:27.793: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09828452s
Feb 15 00:45:29.800: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105271102s
Feb 15 00:45:31.824: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129744481s
Feb 15 00:45:33.838: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144130792s
Feb 15 00:45:35.869: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.174871102s
STEP: Saw pod success
Feb 15 00:45:35.869: INFO: Pod "downward-api-092349a9-4813-4baa-9933-326f51c16f8d" satisfied condition "success or failure"
Feb 15 00:45:35.876: INFO: Trying to get logs from node jerma-node pod downward-api-092349a9-4813-4baa-9933-326f51c16f8d container dapi-container: 
STEP: delete the pod
Feb 15 00:45:36.019: INFO: Waiting for pod downward-api-092349a9-4813-4baa-9933-326f51c16f8d to disappear
Feb 15 00:45:36.026: INFO: Pod downward-api-092349a9-4813-4baa-9933-326f51c16f8d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:45:36.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-541" for this suite.

• [SLOW TEST:10.427 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":137,"skipped":2354,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:45:36.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-z22j
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 00:45:36.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-z22j" in namespace "subpath-1448" to be "success or failure"
Feb 15 00:45:36.187: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Pending", Reason="", readiness=false. Elapsed: 28.434324ms
Feb 15 00:45:38.196: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037933168s
Feb 15 00:45:40.202: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04332406s
Feb 15 00:45:42.214: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055385983s
Feb 15 00:45:44.220: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 8.061060701s
Feb 15 00:45:46.229: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 10.070398083s
Feb 15 00:45:48.236: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 12.077517108s
Feb 15 00:45:50.243: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 14.084859182s
Feb 15 00:45:52.250: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 16.091918043s
Feb 15 00:45:54.330: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 18.171350922s
Feb 15 00:45:56.348: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 20.189169699s
Feb 15 00:45:58.356: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 22.197041704s
Feb 15 00:46:00.363: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 24.204753755s
Feb 15 00:46:02.387: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Running", Reason="", readiness=true. Elapsed: 26.228277959s
Feb 15 00:46:04.394: INFO: Pod "pod-subpath-test-projected-z22j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.235390714s
STEP: Saw pod success
Feb 15 00:46:04.394: INFO: Pod "pod-subpath-test-projected-z22j" satisfied condition "success or failure"
Feb 15 00:46:04.397: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-z22j container test-container-subpath-projected-z22j: 
STEP: delete the pod
Feb 15 00:46:04.510: INFO: Waiting for pod pod-subpath-test-projected-z22j to disappear
Feb 15 00:46:04.573: INFO: Pod pod-subpath-test-projected-z22j no longer exists
STEP: Deleting pod pod-subpath-test-projected-z22j
Feb 15 00:46:04.573: INFO: Deleting pod "pod-subpath-test-projected-z22j" in namespace "subpath-1448"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:04.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1448" for this suite.

• [SLOW TEST:28.607 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":138,"skipped":2360,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:04.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:16.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-835" for this suite.

• [SLOW TEST:11.393 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":139,"skipped":2361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:16.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:46:16.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634" in namespace "downward-api-2785" to be "success or failure"
Feb 15 00:46:16.168: INFO: Pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634": Phase="Pending", Reason="", readiness=false. Elapsed: 21.649935ms
Feb 15 00:46:18.177: INFO: Pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030400779s
Feb 15 00:46:20.184: INFO: Pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03755249s
Feb 15 00:46:22.195: INFO: Pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048833984s
Feb 15 00:46:24.204: INFO: Pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057196872s
STEP: Saw pod success
Feb 15 00:46:24.204: INFO: Pod "downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634" satisfied condition "success or failure"
Feb 15 00:46:24.208: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634 container client-container: 
STEP: delete the pod
Feb 15 00:46:24.317: INFO: Waiting for pod downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634 to disappear
Feb 15 00:46:24.326: INFO: Pod downwardapi-volume-3e5a9d57-a9f7-4fb4-aec2-f3d4116f3634 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:24.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2785" for this suite.

• [SLOW TEST:8.318 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":140,"skipped":2396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:24.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 15 00:46:24.488: INFO: Waiting up to 5m0s for pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7" in namespace "emptydir-3646" to be "success or failure"
Feb 15 00:46:24.530: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 41.792698ms
Feb 15 00:46:26.541: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052971079s
Feb 15 00:46:28.551: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06283429s
Feb 15 00:46:30.840: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351601544s
Feb 15 00:46:32.849: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360298085s
Feb 15 00:46:34.861: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.372232911s
STEP: Saw pod success
Feb 15 00:46:34.861: INFO: Pod "pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7" satisfied condition "success or failure"
Feb 15 00:46:34.868: INFO: Trying to get logs from node jerma-node pod pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7 container test-container: 
STEP: delete the pod
Feb 15 00:46:34.920: INFO: Waiting for pod pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7 to disappear
Feb 15 00:46:34.928: INFO: Pod pod-d33a0006-e667-4b1a-9bb8-1d6eec9457f7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:34.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3646" for this suite.

• [SLOW TEST:10.597 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2425,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:34.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 15 00:46:35.139: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-watch-closed 023e8325-eae7-4202-80fc-d0e02d526cef 8487309 0 2020-02-15 00:46:35 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 15 00:46:35.139: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-watch-closed 023e8325-eae7-4202-80fc-d0e02d526cef 8487310 0 2020-02-15 00:46:35 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 15 00:46:35.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-watch-closed 023e8325-eae7-4202-80fc-d0e02d526cef 8487311 0 2020-02-15 00:46:35 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 15 00:46:35.199: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1570 /api/v1/namespaces/watch-1570/configmaps/e2e-watch-test-watch-closed 023e8325-eae7-4202-80fc-d0e02d526cef 8487312 0 2020-02-15 00:46:35 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:35.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1570" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":142,"skipped":2426,"failed":0}
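The Watchers entry above verifies resume semantics: the first watch closes after observing two events (last resourceVersion 8487310), and a second watch started from that version receives exactly the later events (8487311 MODIFIED, 8487312 DELETED). A minimal, self-contained sketch of those semantics, using the resource versions from the log (this models the behavior; it is not client-go's watch implementation):

```go
package main

import "fmt"

// Event is a minimal stand-in for a watch event. ResourceVersion
// increases monotonically, as in the log lines above (8487309..8487312).
type Event struct {
	Type            string
	ResourceVersion int
}

// replayFrom returns the events that occurred strictly after lastRV —
// the contract of restarting a watch with resourceVersion set to the
// last value observed by the previous watch.
func replayFrom(events []Event, lastRV int) []Event {
	var out []Event
	for _, e := range events {
		if e.ResourceVersion > lastRV {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	events := []Event{
		{"ADDED", 8487309},
		{"MODIFIED", 8487310},
		{"MODIFIED", 8487311}, // happened while the first watch was closed
		{"DELETED", 8487312},
	}
	// First watch closed after two notifications, i.e. after RV 8487310.
	for _, e := range replayFrom(events, 8487310) {
		fmt.Println(e.Type, e.ResourceVersion)
	}
}
```

Run against the log's history, this prints only the MODIFIED (8487311) and DELETED (8487312) events — matching the "Expecting to observe notifications for all changes ... since the first watch closed" step.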
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:35.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 15 00:46:35.410: INFO: Waiting up to 5m0s for pod "pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5" in namespace "emptydir-5800" to be "success or failure"
Feb 15 00:46:35.440: INFO: Pod "pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.954142ms
Feb 15 00:46:37.449: INFO: Pod "pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039162271s
Feb 15 00:46:39.480: INFO: Pod "pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070227635s
Feb 15 00:46:41.500: INFO: Pod "pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090249047s
STEP: Saw pod success
Feb 15 00:46:41.500: INFO: Pod "pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5" satisfied condition "success or failure"
Feb 15 00:46:41.506: INFO: Trying to get logs from node jerma-node pod pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5 container test-container: 
STEP: delete the pod
Feb 15 00:46:41.538: INFO: Waiting for pod pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5 to disappear
Feb 15 00:46:41.542: INFO: Pod pod-1b67adab-c158-47a8-9b56-9fbb8dcd41d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:41.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5800" for this suite.

• [SLOW TEST:6.355 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":143,"skipped":2436,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:41.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-4eac97eb-f7ff-416a-b431-bf39228a62d7
STEP: Creating a pod to test consume secrets
Feb 15 00:46:41.722: INFO: Waiting up to 5m0s for pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4" in namespace "secrets-976" to be "success or failure"
Feb 15 00:46:41.786: INFO: Pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 64.1552ms
Feb 15 00:46:43.808: INFO: Pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086002535s
Feb 15 00:46:45.816: INFO: Pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094105563s
Feb 15 00:46:47.826: INFO: Pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10434681s
Feb 15 00:46:49.864: INFO: Pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142201471s
STEP: Saw pod success
Feb 15 00:46:49.864: INFO: Pod "pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4" satisfied condition "success or failure"
Feb 15 00:46:49.869: INFO: Trying to get logs from node jerma-node pod pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4 container secret-volume-test: 
STEP: delete the pod
Feb 15 00:46:49.934: INFO: Waiting for pod pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4 to disappear
Feb 15 00:46:49.951: INFO: Pod pod-secrets-35e18039-5401-4260-bb28-4f1836bdd1a4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:46:49.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-976" for this suite.

• [SLOW TEST:8.488 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":144,"skipped":2450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:46:50.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:46:51.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:46:53.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:46:55.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:46:57.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324411, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:47:00.113: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:47:00.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-685" for this suite.
STEP: Destroying namespace "webhook-685-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.982 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":145,"skipped":2476,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:47:01.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 15 00:47:01.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:47:17.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9943" for this suite.

• [SLOW TEST:16.865 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":146,"skipped":2505,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:47:17.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-60480ade-fa89-44b1-8a67-37e13228b68d
STEP: Creating a pod to test consume configMaps
Feb 15 00:47:18.070: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a" in namespace "projected-211" to be "success or failure"
Feb 15 00:47:18.089: INFO: Pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.028676ms
Feb 15 00:47:20.095: INFO: Pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025439678s
Feb 15 00:47:22.104: INFO: Pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034064944s
Feb 15 00:47:24.117: INFO: Pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047427346s
Feb 15 00:47:26.125: INFO: Pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054775361s
STEP: Saw pod success
Feb 15 00:47:26.125: INFO: Pod "pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a" satisfied condition "success or failure"
Feb 15 00:47:26.129: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 00:47:26.160: INFO: Waiting for pod pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a to disappear
Feb 15 00:47:26.214: INFO: Pod pod-projected-configmaps-8a824be8-2bc5-4be0-9a94-fcdc8ad5f26a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:47:26.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-211" for this suite.

• [SLOW TEST:8.326 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":147,"skipped":2512,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:47:26.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:47:26.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21" in namespace "downward-api-4882" to be "success or failure"
Feb 15 00:47:26.310: INFO: Pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21": Phase="Pending", Reason="", readiness=false. Elapsed: 7.028513ms
Feb 15 00:47:28.328: INFO: Pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025115506s
Feb 15 00:47:30.337: INFO: Pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034447237s
Feb 15 00:47:32.348: INFO: Pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044748349s
Feb 15 00:47:34.354: INFO: Pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051595609s
STEP: Saw pod success
Feb 15 00:47:34.355: INFO: Pod "downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21" satisfied condition "success or failure"
Feb 15 00:47:34.358: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21 container client-container: 
STEP: delete the pod
Feb 15 00:47:34.608: INFO: Waiting for pod downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21 to disappear
Feb 15 00:47:34.743: INFO: Pod downwardapi-volume-587f752d-e48d-4365-89ec-af82b479eb21 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:47:34.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4882" for this suite.

• [SLOW TEST:8.527 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":148,"skipped":2527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:47:34.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-2650cd03-ba84-4b8e-bbc0-d3fa46a4a817
STEP: Creating secret with name secret-projected-all-test-volume-4d470c50-2dce-4afc-a201-531546780304
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 15 00:47:35.131: INFO: Waiting up to 5m0s for pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705" in namespace "projected-4020" to be "success or failure"
Feb 15 00:47:35.146: INFO: Pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705": Phase="Pending", Reason="", readiness=false. Elapsed: 14.285774ms
Feb 15 00:47:37.153: INFO: Pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021696686s
Feb 15 00:47:39.159: INFO: Pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027948918s
Feb 15 00:47:41.166: INFO: Pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03423208s
Feb 15 00:47:43.173: INFO: Pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041205856s
STEP: Saw pod success
Feb 15 00:47:43.173: INFO: Pod "projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705" satisfied condition "success or failure"
Feb 15 00:47:43.176: INFO: Trying to get logs from node jerma-node pod projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705 container projected-all-volume-test: 
STEP: delete the pod
Feb 15 00:47:43.217: INFO: Waiting for pod projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705 to disappear
Feb 15 00:47:43.301: INFO: Pod projected-volume-d38d3d87-2dbc-42ed-8e68-70fd9fb9d705 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:47:43.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4020" for this suite.

• [SLOW TEST:8.554 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":149,"skipped":2550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:47:43.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:47:44.336: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:47:46.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:47:48.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:47:50.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324464, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:47:53.401: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:47:53.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-90" for this suite.
STEP: Destroying namespace "webhook-90-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.548 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":150,"skipped":2574,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:47:53.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0215 00:48:39.313744      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 00:48:39.313: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:48:39.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7863" for this suite.

• [SLOW TEST:45.470 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":151,"skipped":2576,"failed":0}
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:48:39.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Feb 15 00:48:40.572: INFO: created pod pod-service-account-defaultsa
Feb 15 00:48:40.572: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 15 00:48:40.896: INFO: created pod pod-service-account-mountsa
Feb 15 00:48:40.896: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 15 00:48:40.910: INFO: created pod pod-service-account-nomountsa
Feb 15 00:48:40.910: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 15 00:48:40.926: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 15 00:48:40.926: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 15 00:48:40.980: INFO: created pod pod-service-account-mountsa-mountspec
Feb 15 00:48:40.980: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 15 00:48:41.076: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 15 00:48:41.076: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 15 00:48:41.098: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 15 00:48:41.098: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 15 00:48:41.111: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 15 00:48:41.111: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 15 00:48:41.148: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 15 00:48:41.148: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:48:41.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6261" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":280,"completed":152,"skipped":2581,"failed":0}

------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:48:43.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:48:55.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-79" for this suite.

• [SLOW TEST:13.225 seconds]
[k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":153,"skipped":2581,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:48:56.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 15 00:48:58.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4902'
Feb 15 00:49:02.281: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 00:49:02.281: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Feb 15 00:49:03.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4902'
Feb 15 00:49:10.382: INFO: stderr: ""
Feb 15 00:49:10.383: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:49:10.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4902" for this suite.

• [SLOW TEST:16.588 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":280,"completed":154,"skipped":2583,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:49:12.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:49:24.907: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 00:49:28.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324564, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324564, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:49:30.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324564, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324564, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324567, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:49:33.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324564, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324564, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324567, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... identical deployment status (Progressing/ReplicaSetUpdated, Available=False/MinimumReplicasUnavailable) logged every ~2s from Feb 15 00:49:34 through 00:49:50 while waiting for the webhook pod to become ready ...]
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:49:53.133: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:50:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2073" for this suite.
STEP: Destroying namespace "webhook-2073-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:53.200 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":155,"skipped":2583,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
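For readers following along: the test above registers a validating webhook through the AdmissionRegistration API and verifies that pod and configmap creation is denied. A minimal sketch of that kind of registration object is shown below. This is not the test's actual fixture; the service name and namespace are taken from the log, but the webhook name, path, and rules are illustrative, and `caBundle` is a placeholder.

```yaml
# Sketch of a ValidatingWebhookConfiguration like the one the e2e test registers.
# Illustrative only -- not the test's real fixture.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-resources        # illustrative name
webhooks:
- name: deny.example.com               # illustrative name
  clientConfig:
    service:
      name: e2e-test-webhook           # service name from the log
      namespace: webhook-2073          # namespace from the log
      path: /always-deny               # illustrative path
    caBundle: "<base64-encoded CA certificate>"   # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With `failurePolicy: Fail`, a hanging or unreachable webhook rejects matching requests, which is the behavior the "pod that causes the webhook to hang" step exercises.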
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:50:06.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 00:50:06.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342" in namespace "downward-api-4671" to be "success or failure"
Feb 15 00:50:06.273: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342": Phase="Pending", Reason="", readiness=false. Elapsed: 45.861317ms
Feb 15 00:50:08.280: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053095641s
Feb 15 00:50:10.287: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059386913s
Feb 15 00:50:15.182: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954570643s
Feb 15 00:50:17.189: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342": Phase="Pending", Reason="", readiness=false. Elapsed: 10.962173919s
Feb 15 00:50:19.194: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.967091683s
STEP: Saw pod success
Feb 15 00:50:19.195: INFO: Pod "downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342" satisfied condition "success or failure"
Feb 15 00:50:19.197: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342 container client-container: 
STEP: delete the pod
Feb 15 00:50:19.293: INFO: Waiting for pod downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342 to disappear
Feb 15 00:50:19.307: INFO: Pod downwardapi-volume-f4a4b8b1-30bb-4239-ad69-f4ca26a31342 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:50:19.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4671" for this suite.

• [SLOW TEST:13.335 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":156,"skipped":2616,"failed":0}
SSSSSS
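The Downward API volume test above creates a pod that mounts its own CPU request as a file and verifies the content. A minimal manifest of that shape, assuming a busybox image and illustrative names, might look like:

```yaml
# Sketch of a pod exposing its CPU request via a downwardAPI volume.
# Names and image are illustrative, not the test's generated fixtures.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m           # value is reported in millicores
```

The pod runs to completion (Phase="Succeeded" in the log) because the container just prints the file and exits.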
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:50:19.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-bdce303d-2015-4a63-b9d5-040c3e1c1489
STEP: Creating a pod to test consume secrets
Feb 15 00:50:19.841: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182" in namespace "projected-2623" to be "success or failure"
Feb 15 00:50:19.856: INFO: Pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182": Phase="Pending", Reason="", readiness=false. Elapsed: 15.093348ms
Feb 15 00:50:21.872: INFO: Pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03095741s
Feb 15 00:50:23.883: INFO: Pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041390722s
Feb 15 00:50:25.888: INFO: Pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046618505s
Feb 15 00:50:27.896: INFO: Pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054328982s
STEP: Saw pod success
Feb 15 00:50:27.896: INFO: Pod "pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182" satisfied condition "success or failure"
Feb 15 00:50:27.899: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182 container projected-secret-volume-test: 
STEP: delete the pod
Feb 15 00:50:28.078: INFO: Waiting for pod pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182 to disappear
Feb 15 00:50:28.083: INFO: Pod pod-projected-secrets-185a8774-c076-4f25-9554-5ed55b31e182 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:50:28.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2623" for this suite.

• [SLOW TEST:8.741 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2622,"failed":0}
SSSSSSS
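The projected-secret test above mounts a secret through a `projected` volume with a key-to-path mapping and an explicit item mode. A hedged sketch of such a pod, with illustrative names (the log's generated secret name is shortened here):

```yaml
# Sketch of a projected secret volume with an item mapping and mode.
# Secret name, key, and paths are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mapping-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # illustrative
          items:
          - key: data-1                     # illustrative key
            path: new-path-data-1           # remapped filename
            mode: 0400                      # explicit item mode ([LinuxOnly])
```

The `[LinuxOnly]` tag on the test reflects that file modes on projected volumes are not honored on Windows nodes.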
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:50:28.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Feb 15 00:50:28.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7314 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 15 00:50:36.582: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0215 00:50:35.439817    3076 log.go:172] (0xc000a76b00) (0xc0006bc1e0) Create stream\nI0215 00:50:35.440532    3076 log.go:172] (0xc000a76b00) (0xc0006bc1e0) Stream added, broadcasting: 1\nI0215 00:50:35.449616    3076 log.go:172] (0xc000a76b00) Reply frame received for 1\nI0215 00:50:35.449745    3076 log.go:172] (0xc000a76b00) (0xc0006cbae0) Create stream\nI0215 00:50:35.449767    3076 log.go:172] (0xc000a76b00) (0xc0006cbae0) Stream added, broadcasting: 3\nI0215 00:50:35.454369    3076 log.go:172] (0xc000a76b00) Reply frame received for 3\nI0215 00:50:35.454486    3076 log.go:172] (0xc000a76b00) (0xc0006cbb80) Create stream\nI0215 00:50:35.454515    3076 log.go:172] (0xc000a76b00) (0xc0006cbb80) Stream added, broadcasting: 5\nI0215 00:50:35.460152    3076 log.go:172] (0xc000a76b00) Reply frame received for 5\nI0215 00:50:35.460424    3076 log.go:172] (0xc000a76b00) (0xc0006cbc20) Create stream\nI0215 00:50:35.460565    3076 log.go:172] (0xc000a76b00) (0xc0006cbc20) Stream added, broadcasting: 7\nI0215 00:50:35.462757    3076 log.go:172] (0xc000a76b00) Reply frame received for 7\nI0215 00:50:35.463673    3076 log.go:172] (0xc0006cbae0) (3) Writing data frame\nI0215 00:50:35.463930    3076 log.go:172] (0xc0006cbae0) (3) Writing data frame\nI0215 00:50:35.469259    3076 log.go:172] (0xc000a76b00) Data frame received for 5\nI0215 00:50:35.469317    3076 log.go:172] (0xc0006cbb80) (5) Data frame handling\nI0215 00:50:35.469339    3076 log.go:172] (0xc0006cbb80) (5) Data frame sent\nI0215 00:50:35.473672    3076 log.go:172] (0xc000a76b00) Data frame received for 5\nI0215 00:50:35.473719    3076 log.go:172] (0xc0006cbb80) (5) Data frame handling\nI0215 00:50:35.473739    3076 log.go:172] (0xc0006cbb80) (5) Data frame 
sent\nI0215 00:50:36.494568    3076 log.go:172] (0xc000a76b00) Data frame received for 1\nI0215 00:50:36.494689    3076 log.go:172] (0xc000a76b00) (0xc0006cbb80) Stream removed, broadcasting: 5\nI0215 00:50:36.495034    3076 log.go:172] (0xc0006bc1e0) (1) Data frame handling\nI0215 00:50:36.495059    3076 log.go:172] (0xc0006bc1e0) (1) Data frame sent\nI0215 00:50:36.495137    3076 log.go:172] (0xc000a76b00) (0xc0006cbae0) Stream removed, broadcasting: 3\nI0215 00:50:36.495183    3076 log.go:172] (0xc000a76b00) (0xc0006bc1e0) Stream removed, broadcasting: 1\nI0215 00:50:36.495810    3076 log.go:172] (0xc000a76b00) (0xc0006cbc20) Stream removed, broadcasting: 7\nI0215 00:50:36.495841    3076 log.go:172] (0xc000a76b00) Go away received\nI0215 00:50:36.496412    3076 log.go:172] (0xc000a76b00) (0xc0006bc1e0) Stream removed, broadcasting: 1\nI0215 00:50:36.496463    3076 log.go:172] (0xc000a76b00) (0xc0006cbae0) Stream removed, broadcasting: 3\nI0215 00:50:36.496482    3076 log.go:172] (0xc000a76b00) (0xc0006cbb80) Stream removed, broadcasting: 5\nI0215 00:50:36.496510    3076 log.go:172] (0xc000a76b00) (0xc0006cbc20) Stream removed, broadcasting: 7\n"
Feb 15 00:50:36.583: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:50:38.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7314" for this suite.

• [SLOW TEST:10.447 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":158,"skipped":2629,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
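The `kubectl run --rm` test above relies on the deprecated `--generator=job/v1` flag (the log's stderr warns about this). The Job the command generates is roughly equivalent to the following manifest, reconstructed from the command line in the log; treat it as a sketch rather than the exact server-side object:

```yaml
# Approximate Job equivalent of:
#   kubectl run e2e-test-rm-busybox-job --image=busybox:1.29 \
#     --rm=true --generator=job/v1 --restart=OnFailure --attach --stdin \
#     -- sh -c "cat && echo 'stdin closed'"
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true          # the test attaches and writes "abcd1234" to stdin
```

The `--rm` and `--attach` behaviors live in kubectl, not the manifest: kubectl attaches to the pod's stdin and deletes the Job after it completes, which is why the log's stdout ends with `job.batch "e2e-test-rm-busybox-job" deleted`.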
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:50:38.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 15 00:50:38.782: INFO: namespace kubectl-6123
Feb 15 00:50:38.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6123'
Feb 15 00:50:39.326: INFO: stderr: ""
Feb 15 00:50:39.326: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 15 00:50:40.346: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:40.346: INFO: Found 0 / 1
Feb 15 00:50:41.336: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:41.336: INFO: Found 0 / 1
Feb 15 00:50:42.364: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:42.364: INFO: Found 0 / 1
Feb 15 00:50:43.336: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:43.336: INFO: Found 0 / 1
Feb 15 00:50:44.333: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:44.333: INFO: Found 0 / 1
Feb 15 00:50:45.334: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:45.334: INFO: Found 1 / 1
Feb 15 00:50:45.334: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 15 00:50:45.341: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 00:50:45.341: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 15 00:50:45.341: INFO: wait on agnhost-master startup in kubectl-6123 
Feb 15 00:50:45.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-gskht agnhost-master --namespace=kubectl-6123'
Feb 15 00:50:45.565: INFO: stderr: ""
Feb 15 00:50:45.566: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb 15 00:50:45.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6123'
Feb 15 00:50:45.813: INFO: stderr: ""
Feb 15 00:50:45.813: INFO: stdout: "service/rm2 exposed\n"
Feb 15 00:50:45.817: INFO: Service rm2 in namespace kubectl-6123 found.
STEP: exposing service
Feb 15 00:50:47.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6123'
Feb 15 00:50:48.099: INFO: stderr: ""
Feb 15 00:50:48.099: INFO: stdout: "service/rm3 exposed\n"
Feb 15 00:50:48.105: INFO: Service rm3 in namespace kubectl-6123 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:50:50.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6123" for this suite.

• [SLOW TEST:11.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":280,"completed":159,"skipped":2656,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
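The `kubectl expose` test above turns a replication controller (and then an existing service) into new services. The first `expose` invocation in the log is roughly equivalent to this Service manifest; the selector is inferred from the `app:agnhost` label the log shows the RC's pods matching:

```yaml
# Approximate Service equivalent of:
#   kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-6123
spec:
  selector:
    app: agnhost          # inferred from the RC's pod labels in the log
  ports:
  - port: 1234            # service port
    targetPort: 6379      # container port
```

`kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379` then creates a second service (`rm3`) that reuses the same selector, fronting the same pods on a different port.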
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:50:50.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 15 00:50:50.430: INFO: Waiting up to 5m0s for pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9" in namespace "emptydir-9679" to be "success or failure"
Feb 15 00:50:50.449: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.622756ms
Feb 15 00:50:52.477: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046536702s
Feb 15 00:50:54.485: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054385767s
Feb 15 00:50:56.526: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096023555s
Feb 15 00:50:58.546: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115038189s
Feb 15 00:51:00.558: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127841988s
STEP: Saw pod success
Feb 15 00:51:00.559: INFO: Pod "pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9" satisfied condition "success or failure"
Feb 15 00:51:00.563: INFO: Trying to get logs from node jerma-node pod pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9 container test-container: 
STEP: delete the pod
Feb 15 00:51:00.687: INFO: Waiting for pod pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9 to disappear
Feb 15 00:51:00.758: INFO: Pod pod-27e9a6c4-2d1e-4910-9864-a6693ba2bfd9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:51:00.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9679" for this suite.

• [SLOW TEST:10.646 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2692,"failed":0}
SSSSSSSSS
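The EmptyDir test above checks the mount mode of a memory-backed (tmpfs) emptyDir volume. A minimal pod of that shape, with illustrative names, would be:

```yaml
# Sketch of a tmpfs-backed emptyDir volume; names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Print the mount info so the volume type and mode can be inspected.
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory       # tmpfs instead of node disk
```

`medium: Memory` makes the kubelet mount a tmpfs for the volume, which also means its contents count against the container's memory limit; this is another `[LinuxOnly]` behavior.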
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:51:00.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 15 00:51:00.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6974'
Feb 15 00:51:01.223: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 00:51:01.223: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Feb 15 00:51:01.361: INFO: scanned /root for discovery docs: 
Feb 15 00:51:01.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6974'
Feb 15 00:51:23.829: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 15 00:51:23.830: INFO: stdout: "Created e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335\nScaling up e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 15 00:51:23.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6974'
Feb 15 00:51:24.075: INFO: stderr: ""
Feb 15 00:51:24.075: INFO: stdout: "e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335-c9nng "
Feb 15 00:51:24.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335-c9nng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6974'
Feb 15 00:51:24.251: INFO: stderr: ""
Feb 15 00:51:24.251: INFO: stdout: "true"
Feb 15 00:51:24.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335-c9nng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6974'
Feb 15 00:51:24.341: INFO: stderr: ""
Feb 15 00:51:24.341: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 15 00:51:24.341: INFO: e2e-test-httpd-rc-0a1b692914106465279b3cdd874ff335-c9nng is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
Feb 15 00:51:24.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6974'
Feb 15 00:51:24.456: INFO: stderr: ""
Feb 15 00:51:24.457: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:51:24.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6974" for this suite.

• [SLOW TEST:23.686 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":280,"completed":161,"skipped":2701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:51:24.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6578
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 15 00:51:24.624: INFO: Found 0 stateful pods, waiting for 3
Feb 15 00:51:34.779: INFO: Found 2 stateful pods, waiting for 3
Feb 15 00:51:44.632: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:51:44.633: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:51:44.633: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 00:51:54.636: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:51:54.636: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:51:54.636: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 15 00:51:54.665: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 15 00:52:04.731: INFO: Updating stateful set ss2
Feb 15 00:52:04.762: INFO: Waiting for Pod statefulset-6578/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 15 00:52:15.143: INFO: Found 2 stateful pods, waiting for 3
Feb 15 00:52:25.150: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:52:25.150: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:52:25.150: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 00:52:35.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:52:35.151: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:52:35.151: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 15 00:52:35.180: INFO: Updating stateful set ss2
Feb 15 00:52:35.225: INFO: Waiting for Pod statefulset-6578/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:52:45.236: INFO: Waiting for Pod statefulset-6578/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:52:55.252: INFO: Updating stateful set ss2
Feb 15 00:52:55.366: INFO: Waiting for StatefulSet statefulset-6578/ss2 to complete update
Feb 15 00:52:55.366: INFO: Waiting for Pod statefulset-6578/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 15 00:53:05.382: INFO: Waiting for StatefulSet statefulset-6578/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 15 00:53:15.387: INFO: Deleting all statefulset in ns statefulset-6578
Feb 15 00:53:15.392: INFO: Scaling statefulset ss2 to 0
Feb 15 00:53:45.472: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:53:45.477: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:53:45.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6578" for this suite.

• [SLOW TEST:141.100 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":162,"skipped":2737,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:53:45.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 15 00:53:45.721: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 15 00:53:50.740: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:53:51.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7437" for this suite.

• [SLOW TEST:6.428 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":163,"skipped":2738,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:53:52.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 15 00:53:53.172: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 15 00:53:55.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:53:57.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:53:59.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:54:01.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:54:03.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 00:54:05.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717324833, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 00:54:08.217: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 00:54:08.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 00:54:12.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1708" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:20.730 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":164,"skipped":2749,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 00:54:12.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5643
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5643
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5643
Feb 15 00:54:12.895: INFO: Found 0 stateful pods, waiting for 1
Feb 15 00:54:22.905: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 15 00:54:22.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:54:23.274: INFO: stderr: "I0215 00:54:23.052842    3308 log.go:172] (0xc0009926e0) (0xc000a14000) Create stream\nI0215 00:54:23.052967    3308 log.go:172] (0xc0009926e0) (0xc000a14000) Stream added, broadcasting: 1\nI0215 00:54:23.059439    3308 log.go:172] (0xc0009926e0) Reply frame received for 1\nI0215 00:54:23.059468    3308 log.go:172] (0xc0009926e0) (0xc00070dae0) Create stream\nI0215 00:54:23.059477    3308 log.go:172] (0xc0009926e0) (0xc00070dae0) Stream added, broadcasting: 3\nI0215 00:54:23.060477    3308 log.go:172] (0xc0009926e0) Reply frame received for 3\nI0215 00:54:23.060507    3308 log.go:172] (0xc0009926e0) (0xc00022a000) Create stream\nI0215 00:54:23.060521    3308 log.go:172] (0xc0009926e0) (0xc00022a000) Stream added, broadcasting: 5\nI0215 00:54:23.061669    3308 log.go:172] (0xc0009926e0) Reply frame received for 5\nI0215 00:54:23.123027    3308 log.go:172] (0xc0009926e0) Data frame received for 5\nI0215 00:54:23.123091    3308 log.go:172] (0xc00022a000) (5) Data frame handling\nI0215 00:54:23.123111    3308 log.go:172] (0xc00022a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:54:23.149686    3308 log.go:172] (0xc0009926e0) Data frame received for 3\nI0215 00:54:23.149716    3308 log.go:172] (0xc00070dae0) (3) Data frame handling\nI0215 00:54:23.149734    3308 log.go:172] (0xc00070dae0) (3) Data frame sent\nI0215 00:54:23.259002    3308 log.go:172] (0xc0009926e0) Data frame received for 1\nI0215 00:54:23.259065    3308 log.go:172] (0xc000a14000) (1) Data frame handling\nI0215 00:54:23.259091    3308 log.go:172] (0xc000a14000) (1) Data frame sent\nI0215 00:54:23.259128    3308 log.go:172] (0xc0009926e0) (0xc000a14000) Stream removed, broadcasting: 1\nI0215 00:54:23.260663    3308 log.go:172] (0xc0009926e0) (0xc00070dae0) Stream removed, broadcasting: 3\nI0215 00:54:23.260985    3308 log.go:172] (0xc0009926e0) (0xc00022a000) Stream removed, broadcasting: 5\nI0215 00:54:23.261092    3308 
log.go:172] (0xc0009926e0) Go away received\nI0215 00:54:23.261189    3308 log.go:172] (0xc0009926e0) (0xc000a14000) Stream removed, broadcasting: 1\nI0215 00:54:23.261229    3308 log.go:172] (0xc0009926e0) (0xc00070dae0) Stream removed, broadcasting: 3\nI0215 00:54:23.261242    3308 log.go:172] (0xc0009926e0) (0xc00022a000) Stream removed, broadcasting: 5\n"
Feb 15 00:54:23.275: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:54:23.275: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:54:23.285: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 15 00:54:33.291: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:54:33.291: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:54:33.314: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999162s
Feb 15 00:54:34.321: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990686839s
Feb 15 00:54:35.328: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983687211s
Feb 15 00:54:36.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977273722s
Feb 15 00:54:37.343: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.969925399s
Feb 15 00:54:38.352: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.962229525s
Feb 15 00:54:39.360: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.953148611s
Feb 15 00:54:40.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.944785792s
Feb 15 00:54:41.384: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.928517199s
Feb 15 00:54:42.391: INFO: Verifying statefulset ss doesn't scale past 1 for another 921.049175ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5643
Feb 15 00:54:43.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:54:43.883: INFO: stderr: "I0215 00:54:43.626507    3328 log.go:172] (0xc0009d4000) (0xc0006c7b80) Create stream\nI0215 00:54:43.626777    3328 log.go:172] (0xc0009d4000) (0xc0006c7b80) Stream added, broadcasting: 1\nI0215 00:54:43.642814    3328 log.go:172] (0xc0009d4000) Reply frame received for 1\nI0215 00:54:43.642898    3328 log.go:172] (0xc0009d4000) (0xc000a0a000) Create stream\nI0215 00:54:43.642929    3328 log.go:172] (0xc0009d4000) (0xc000a0a000) Stream added, broadcasting: 3\nI0215 00:54:43.647221    3328 log.go:172] (0xc0009d4000) Reply frame received for 3\nI0215 00:54:43.647342    3328 log.go:172] (0xc0009d4000) (0xc0003bd400) Create stream\nI0215 00:54:43.647360    3328 log.go:172] (0xc0009d4000) (0xc0003bd400) Stream added, broadcasting: 5\nI0215 00:54:43.649913    3328 log.go:172] (0xc0009d4000) Reply frame received for 5\nI0215 00:54:43.733110    3328 log.go:172] (0xc0009d4000) Data frame received for 3\nI0215 00:54:43.733252    3328 log.go:172] (0xc0009d4000) Data frame received for 5\nI0215 00:54:43.733292    3328 log.go:172] (0xc0003bd400) (5) Data frame handling\nI0215 00:54:43.733317    3328 log.go:172] (0xc0003bd400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0215 00:54:43.733343    3328 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0215 00:54:43.733353    3328 log.go:172] (0xc000a0a000) (3) Data frame sent\nI0215 00:54:43.858694    3328 log.go:172] (0xc0009d4000) (0xc000a0a000) Stream removed, broadcasting: 3\nI0215 00:54:43.858909    3328 log.go:172] (0xc0009d4000) Data frame received for 1\nI0215 00:54:43.858965    3328 log.go:172] (0xc0009d4000) (0xc0003bd400) Stream removed, broadcasting: 5\nI0215 00:54:43.859023    3328 log.go:172] (0xc0006c7b80) (1) Data frame handling\nI0215 00:54:43.859061    3328 log.go:172] (0xc0006c7b80) (1) Data frame sent\nI0215 00:54:43.859070    3328 log.go:172] (0xc0009d4000) (0xc0006c7b80) Stream removed, broadcasting: 1\nI0215 00:54:43.859106    3328 
log.go:172] (0xc0009d4000) Go away received\nI0215 00:54:43.861190    3328 log.go:172] (0xc0009d4000) (0xc0006c7b80) Stream removed, broadcasting: 1\nI0215 00:54:43.861212    3328 log.go:172] (0xc0009d4000) (0xc000a0a000) Stream removed, broadcasting: 3\nI0215 00:54:43.861234    3328 log.go:172] (0xc0009d4000) (0xc0003bd400) Stream removed, broadcasting: 5\n"
Feb 15 00:54:43.883: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:54:43.883: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:54:43.943: INFO: Found 2 stateful pods, waiting for 3
Feb 15 00:54:53.953: INFO: Found 2 stateful pods, waiting for 3
Feb 15 00:55:03.954: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:55:03.954: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:55:03.954: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Feb 15 00:55:13.957: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:55:13.957: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 00:55:13.957: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 15 00:55:13.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:55:14.457: INFO: stderr: "I0215 00:55:14.242961    3350 log.go:172] (0xc000a48bb0) (0xc000acc3c0) Create stream\nI0215 00:55:14.243122    3350 log.go:172] (0xc000a48bb0) (0xc000acc3c0) Stream added, broadcasting: 1\nI0215 00:55:14.247133    3350 log.go:172] (0xc000a48bb0) Reply frame received for 1\nI0215 00:55:14.247195    3350 log.go:172] (0xc000a48bb0) (0xc000acc460) Create stream\nI0215 00:55:14.247201    3350 log.go:172] (0xc000a48bb0) (0xc000acc460) Stream added, broadcasting: 3\nI0215 00:55:14.248501    3350 log.go:172] (0xc000a48bb0) Reply frame received for 3\nI0215 00:55:14.248537    3350 log.go:172] (0xc000a48bb0) (0xc0009680a0) Create stream\nI0215 00:55:14.248559    3350 log.go:172] (0xc000a48bb0) (0xc0009680a0) Stream added, broadcasting: 5\nI0215 00:55:14.249873    3350 log.go:172] (0xc000a48bb0) Reply frame received for 5\nI0215 00:55:14.344015    3350 log.go:172] (0xc000a48bb0) Data frame received for 3\nI0215 00:55:14.344176    3350 log.go:172] (0xc000acc460) (3) Data frame handling\nI0215 00:55:14.344221    3350 log.go:172] (0xc000acc460) (3) Data frame sent\nI0215 00:55:14.344386    3350 log.go:172] (0xc000a48bb0) Data frame received for 5\nI0215 00:55:14.344409    3350 log.go:172] (0xc0009680a0) (5) Data frame handling\nI0215 00:55:14.344438    3350 log.go:172] (0xc0009680a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:55:14.438518    3350 log.go:172] (0xc000a48bb0) (0xc000acc460) Stream removed, broadcasting: 3\nI0215 00:55:14.439110    3350 log.go:172] (0xc000a48bb0) Data frame received for 1\nI0215 00:55:14.439161    3350 log.go:172] (0xc000acc3c0) (1) Data frame handling\nI0215 00:55:14.439182    3350 log.go:172] (0xc000acc3c0) (1) Data frame sent\nI0215 00:55:14.439194    3350 log.go:172] (0xc000a48bb0) (0xc000acc3c0) Stream removed, broadcasting: 1\nI0215 00:55:14.439826    3350 log.go:172] (0xc000a48bb0) (0xc0009680a0) Stream removed, broadcasting: 5\nI0215 00:55:14.440088    3350 
log.go:172] (0xc000a48bb0) Go away received\nI0215 00:55:14.441122    3350 log.go:172] (0xc000a48bb0) (0xc000acc3c0) Stream removed, broadcasting: 1\nI0215 00:55:14.441151    3350 log.go:172] (0xc000a48bb0) (0xc000acc460) Stream removed, broadcasting: 3\nI0215 00:55:14.441158    3350 log.go:172] (0xc000a48bb0) (0xc0009680a0) Stream removed, broadcasting: 5\n"
Feb 15 00:55:14.457: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:55:14.457: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:55:14.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:55:14.998: INFO: stderr: "I0215 00:55:14.684232    3370 log.go:172] (0xc000a193f0) (0xc000a046e0) Create stream\nI0215 00:55:14.684501    3370 log.go:172] (0xc000a193f0) (0xc000a046e0) Stream added, broadcasting: 1\nI0215 00:55:14.699884    3370 log.go:172] (0xc000a193f0) Reply frame received for 1\nI0215 00:55:14.699997    3370 log.go:172] (0xc000a193f0) (0xc000a04000) Create stream\nI0215 00:55:14.700021    3370 log.go:172] (0xc000a193f0) (0xc000a04000) Stream added, broadcasting: 3\nI0215 00:55:14.701407    3370 log.go:172] (0xc000a193f0) Reply frame received for 3\nI0215 00:55:14.701531    3370 log.go:172] (0xc000a193f0) (0xc00050b360) Create stream\nI0215 00:55:14.701551    3370 log.go:172] (0xc000a193f0) (0xc00050b360) Stream added, broadcasting: 5\nI0215 00:55:14.703448    3370 log.go:172] (0xc000a193f0) Reply frame received for 5\nI0215 00:55:14.799505    3370 log.go:172] (0xc000a193f0) Data frame received for 5\nI0215 00:55:14.799621    3370 log.go:172] (0xc00050b360) (5) Data frame handling\nI0215 00:55:14.799658    3370 log.go:172] (0xc00050b360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:55:14.862897    3370 log.go:172] (0xc000a193f0) Data frame received for 3\nI0215 00:55:14.862984    3370 log.go:172] (0xc000a04000) (3) Data frame handling\nI0215 00:55:14.863036    3370 log.go:172] (0xc000a04000) (3) Data frame sent\nI0215 00:55:14.980413    3370 log.go:172] (0xc000a193f0) Data frame received for 1\nI0215 00:55:14.980466    3370 log.go:172] (0xc000a046e0) (1) Data frame handling\nI0215 00:55:14.980488    3370 log.go:172] (0xc000a046e0) (1) Data frame sent\nI0215 00:55:14.980511    3370 log.go:172] (0xc000a193f0) (0xc000a046e0) Stream removed, broadcasting: 1\nI0215 00:55:14.981451    3370 log.go:172] (0xc000a193f0) (0xc000a04000) Stream removed, broadcasting: 3\nI0215 00:55:14.981910    3370 log.go:172] (0xc000a193f0) (0xc00050b360) Stream removed, broadcasting: 5\nI0215 00:55:14.982347    3370 
log.go:172] (0xc000a193f0) Go away received\nI0215 00:55:14.982705    3370 log.go:172] (0xc000a193f0) (0xc000a046e0) Stream removed, broadcasting: 1\nI0215 00:55:14.982770    3370 log.go:172] (0xc000a193f0) (0xc000a04000) Stream removed, broadcasting: 3\nI0215 00:55:14.982781    3370 log.go:172] (0xc000a193f0) (0xc00050b360) Stream removed, broadcasting: 5\n"
Feb 15 00:55:14.998: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:55:14.998: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:55:14.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 15 00:55:15.387: INFO: stderr: "I0215 00:55:15.203299    3391 log.go:172] (0xc00001efd0) (0xc000af0000) Create stream\nI0215 00:55:15.203401    3391 log.go:172] (0xc00001efd0) (0xc000af0000) Stream added, broadcasting: 1\nI0215 00:55:15.207870    3391 log.go:172] (0xc00001efd0) Reply frame received for 1\nI0215 00:55:15.207953    3391 log.go:172] (0xc00001efd0) (0xc00066dcc0) Create stream\nI0215 00:55:15.207968    3391 log.go:172] (0xc00001efd0) (0xc00066dcc0) Stream added, broadcasting: 3\nI0215 00:55:15.209324    3391 log.go:172] (0xc00001efd0) Reply frame received for 3\nI0215 00:55:15.209354    3391 log.go:172] (0xc00001efd0) (0xc0002ec000) Create stream\nI0215 00:55:15.209362    3391 log.go:172] (0xc00001efd0) (0xc0002ec000) Stream added, broadcasting: 5\nI0215 00:55:15.210915    3391 log.go:172] (0xc00001efd0) Reply frame received for 5\nI0215 00:55:15.266132    3391 log.go:172] (0xc00001efd0) Data frame received for 5\nI0215 00:55:15.266206    3391 log.go:172] (0xc0002ec000) (5) Data frame handling\nI0215 00:55:15.266244    3391 log.go:172] (0xc0002ec000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0215 00:55:15.309475    3391 log.go:172] (0xc00001efd0) Data frame received for 3\nI0215 00:55:15.309511    3391 log.go:172] (0xc00066dcc0) (3) Data frame handling\nI0215 00:55:15.309525    3391 log.go:172] (0xc00066dcc0) (3) Data frame sent\nI0215 00:55:15.374221    3391 log.go:172] (0xc00001efd0) Data frame received for 1\nI0215 00:55:15.374291    3391 log.go:172] (0xc000af0000) (1) Data frame handling\nI0215 00:55:15.374323    3391 log.go:172] (0xc000af0000) (1) Data frame sent\nI0215 00:55:15.374603    3391 log.go:172] (0xc00001efd0) (0xc000af0000) Stream removed, broadcasting: 1\nI0215 00:55:15.376387    3391 log.go:172] (0xc00001efd0) (0xc00066dcc0) Stream removed, broadcasting: 3\nI0215 00:55:15.376913    3391 log.go:172] (0xc00001efd0) (0xc0002ec000) Stream removed, broadcasting: 5\nI0215 00:55:15.376996    3391 
log.go:172] (0xc00001efd0) (0xc000af0000) Stream removed, broadcasting: 1\nI0215 00:55:15.377019    3391 log.go:172] (0xc00001efd0) (0xc00066dcc0) Stream removed, broadcasting: 3\nI0215 00:55:15.377030    3391 log.go:172] (0xc00001efd0) (0xc0002ec000) Stream removed, broadcasting: 5\nI0215 00:55:15.377160    3391 log.go:172] (0xc00001efd0) Go away received\n"
Feb 15 00:55:15.387: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 15 00:55:15.387: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 15 00:55:15.387: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 00:55:15.393: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
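The `mv` commands above are how this suite toggles pod health: moving `index.html` out of the Apache document root makes the container's HTTP readiness probe start failing (Ready=false), and moving it back restores readiness. A minimal sketch of that toggle, assuming an httpd-based pod whose readiness probe GETs `/index.html` as in the log above; the `break_probe`/`restore_probe` names and the `KUBECTL` override are illustrative, not part of the framework:

```shell
#!/bin/sh
# Toggle readiness of an httpd StatefulSet pod by moving index.html
# out of (or back into) the Apache document root. Assumes a readiness
# probe that GETs /index.html, as the e2e test does.
KUBECTL="${KUBECTL:-kubectl}"     # set KUBECTL=echo to dry-run
NS="${NS:-statefulset-5643}"      # namespace from the log above

break_probe() {   # make pod $1 report Ready=false
    "$KUBECTL" exec --namespace="$NS" "$1" -- /bin/sh -x -c \
        'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
}

restore_probe() { # make pod $1 report Ready=true again
    "$KUBECTL" exec --namespace="$NS" "$1" -- /bin/sh -x -c \
        'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
}
```

With `KUBECTL=echo` the functions print the exec invocation instead of contacting a cluster, which is useful for checking the generated command line.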
Feb 15 00:55:25.405: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:55:25.406: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:55:25.406: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 00:55:25.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999501s
Feb 15 00:55:26.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98842602s
Feb 15 00:55:27.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981146591s
Feb 15 00:55:29.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973385863s
Feb 15 00:55:30.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.488936322s
Feb 15 00:55:31.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.469170162s
Feb 15 00:55:32.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.45606058s
Feb 15 00:55:33.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.445171829s
Feb 15 00:55:34.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 429.149336ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5643
Feb 15 00:55:36.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:55:36.420: INFO: stderr: "I0215 00:55:36.227862    3413 log.go:172] (0xc000a25550) (0xc0009fa500) Create stream\nI0215 00:55:36.228067    3413 log.go:172] (0xc000a25550) (0xc0009fa500) Stream added, broadcasting: 1\nI0215 00:55:36.236437    3413 log.go:172] (0xc000a25550) Reply frame received for 1\nI0215 00:55:36.236535    3413 log.go:172] (0xc000a25550) (0xc000440820) Create stream\nI0215 00:55:36.236569    3413 log.go:172] (0xc000a25550) (0xc000440820) Stream added, broadcasting: 3\nI0215 00:55:36.241518    3413 log.go:172] (0xc000a25550) Reply frame received for 3\nI0215 00:55:36.241626    3413 log.go:172] (0xc000a25550) (0xc00098a1e0) Create stream\nI0215 00:55:36.241639    3413 log.go:172] (0xc000a25550) (0xc00098a1e0) Stream added, broadcasting: 5\nI0215 00:55:36.243633    3413 log.go:172] (0xc000a25550) Reply frame received for 5\nI0215 00:55:36.335750    3413 log.go:172] (0xc000a25550) Data frame received for 5\nI0215 00:55:36.335896    3413 log.go:172] (0xc00098a1e0) (5) Data frame handling\nI0215 00:55:36.335931    3413 log.go:172] (0xc00098a1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0215 00:55:36.336139    3413 log.go:172] (0xc000a25550) Data frame received for 3\nI0215 00:55:36.336160    3413 log.go:172] (0xc000440820) (3) Data frame handling\nI0215 00:55:36.336195    3413 log.go:172] (0xc000440820) (3) Data frame sent\nI0215 00:55:36.409409    3413 log.go:172] (0xc000a25550) Data frame received for 1\nI0215 00:55:36.409514    3413 log.go:172] (0xc000a25550) (0xc000440820) Stream removed, broadcasting: 3\nI0215 00:55:36.409684    3413 log.go:172] (0xc0009fa500) (1) Data frame handling\nI0215 00:55:36.409741    3413 log.go:172] (0xc0009fa500) (1) Data frame sent\nI0215 00:55:36.409795    3413 log.go:172] (0xc000a25550) (0xc00098a1e0) Stream removed, broadcasting: 5\nI0215 00:55:36.409837    3413 log.go:172] (0xc000a25550) (0xc0009fa500) Stream removed, broadcasting: 1\nI0215 00:55:36.409865    3413 
log.go:172] (0xc000a25550) Go away received\nI0215 00:55:36.410931    3413 log.go:172] (0xc000a25550) (0xc0009fa500) Stream removed, broadcasting: 1\nI0215 00:55:36.410959    3413 log.go:172] (0xc000a25550) (0xc000440820) Stream removed, broadcasting: 3\nI0215 00:55:36.410966    3413 log.go:172] (0xc000a25550) (0xc00098a1e0) Stream removed, broadcasting: 5\n"
Feb 15 00:55:36.420: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:55:36.420: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:55:36.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:55:36.821: INFO: stderr: "I0215 00:55:36.623658    3434 log.go:172] (0xc0009d8420) (0xc000a763c0) Create stream\nI0215 00:55:36.623848    3434 log.go:172] (0xc0009d8420) (0xc000a763c0) Stream added, broadcasting: 1\nI0215 00:55:36.627661    3434 log.go:172] (0xc0009d8420) Reply frame received for 1\nI0215 00:55:36.627706    3434 log.go:172] (0xc0009d8420) (0xc000a4c320) Create stream\nI0215 00:55:36.627718    3434 log.go:172] (0xc0009d8420) (0xc000a4c320) Stream added, broadcasting: 3\nI0215 00:55:36.628740    3434 log.go:172] (0xc0009d8420) Reply frame received for 3\nI0215 00:55:36.628774    3434 log.go:172] (0xc0009d8420) (0xc0009840a0) Create stream\nI0215 00:55:36.628788    3434 log.go:172] (0xc0009d8420) (0xc0009840a0) Stream added, broadcasting: 5\nI0215 00:55:36.629542    3434 log.go:172] (0xc0009d8420) Reply frame received for 5\nI0215 00:55:36.711684    3434 log.go:172] (0xc0009d8420) Data frame received for 3\nI0215 00:55:36.711759    3434 log.go:172] (0xc000a4c320) (3) Data frame handling\nI0215 00:55:36.711778    3434 log.go:172] (0xc000a4c320) (3) Data frame sent\nI0215 00:55:36.711816    3434 log.go:172] (0xc0009d8420) Data frame received for 5\nI0215 00:55:36.711824    3434 log.go:172] (0xc0009840a0) (5) Data frame handling\nI0215 00:55:36.711848    3434 log.go:172] (0xc0009840a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0215 00:55:36.806098    3434 log.go:172] (0xc0009d8420) (0xc000a4c320) Stream removed, broadcasting: 3\nI0215 00:55:36.806393    3434 log.go:172] (0xc0009d8420) Data frame received for 1\nI0215 00:55:36.806411    3434 log.go:172] (0xc000a763c0) (1) Data frame handling\nI0215 00:55:36.806433    3434 log.go:172] (0xc000a763c0) (1) Data frame sent\nI0215 00:55:36.806475    3434 log.go:172] (0xc0009d8420) (0xc000a763c0) Stream removed, broadcasting: 1\nI0215 00:55:36.806590    3434 log.go:172] (0xc0009d8420) (0xc0009840a0) Stream removed, broadcasting: 5\nI0215 00:55:36.806678    3434 
log.go:172] (0xc0009d8420) Go away received\nI0215 00:55:36.808013    3434 log.go:172] (0xc0009d8420) (0xc000a763c0) Stream removed, broadcasting: 1\nI0215 00:55:36.808066    3434 log.go:172] (0xc0009d8420) (0xc000a4c320) Stream removed, broadcasting: 3\nI0215 00:55:36.808096    3434 log.go:172] (0xc0009d8420) (0xc0009840a0) Stream removed, broadcasting: 5\n"
Feb 15 00:55:36.822: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 15 00:55:36.822: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 15 00:55:36.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:55:37.093: INFO: rc: 126
Feb 15 00:55:37.094: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0215 00:55:37.018182    3454 log.go:172] (0xc000a68d10) (0xc000b58140) Create stream
I0215 00:55:37.018910    3454 log.go:172] (0xc000a68d10) (0xc000b58140) Stream added, broadcasting: 1
I0215 00:55:37.030787    3454 log.go:172] (0xc000a68d10) Reply frame received for 1
I0215 00:55:37.031356    3454 log.go:172] (0xc000a68d10) (0xc000b500a0) Create stream
I0215 00:55:37.031435    3454 log.go:172] (0xc000a68d10) (0xc000b500a0) Stream added, broadcasting: 3
I0215 00:55:37.035669    3454 log.go:172] (0xc000a68d10) Reply frame received for 3
I0215 00:55:37.035725    3454 log.go:172] (0xc000a68d10) (0xc000b50140) Create stream
I0215 00:55:37.035750    3454 log.go:172] (0xc000a68d10) (0xc000b50140) Stream added, broadcasting: 5
I0215 00:55:37.040056    3454 log.go:172] (0xc000a68d10) Reply frame received for 5
I0215 00:55:37.083840    3454 log.go:172] (0xc000a68d10) Data frame received for 3
I0215 00:55:37.084028    3454 log.go:172] (0xc000b500a0) (3) Data frame handling
I0215 00:55:37.084090    3454 log.go:172] (0xc000b500a0) (3) Data frame sent
I0215 00:55:37.084571    3454 log.go:172] (0xc000a68d10) (0xc000b500a0) Stream removed, broadcasting: 3
I0215 00:55:37.084723    3454 log.go:172] (0xc000a68d10) Data frame received for 1
I0215 00:55:37.084731    3454 log.go:172] (0xc000b58140) (1) Data frame handling
I0215 00:55:37.084744    3454 log.go:172] (0xc000b58140) (1) Data frame sent
I0215 00:55:37.084751    3454 log.go:172] (0xc000a68d10) (0xc000b58140) Stream removed, broadcasting: 1
I0215 00:55:37.085082    3454 log.go:172] (0xc000a68d10) (0xc000b50140) Stream removed, broadcasting: 5
I0215 00:55:37.085141    3454 log.go:172] (0xc000a68d10) Go away received
I0215 00:55:37.085832    3454 log.go:172] (0xc000a68d10) (0xc000b58140) Stream removed, broadcasting: 1
I0215 00:55:37.085847    3454 log.go:172] (0xc000a68d10) (0xc000b500a0) Stream removed, broadcasting: 3
I0215 00:55:37.085852    3454 log.go:172] (0xc000a68d10) (0xc000b50140) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
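Note that the `|| true` inside the remote shell only swallows failures of `mv` itself; the exit codes 126 and 1 seen here come from `kubectl exec` failing to reach the container at all (first a stopped container, then a deleted pod), so the framework falls back to its fixed 10-second retry loop. A rough sketch of that retry pattern; `run_host_cmd_with_retries` is a hypothetical name, not the framework's API, and the interval is parameterized only so the loop can be exercised quickly:

```shell
#!/bin/sh
# Retry a command at a fixed interval until it succeeds or the attempt
# budget runs out, mirroring the RunHostCmd retry behavior in the log.
# run_host_cmd_with_retries is an illustrative name, not framework API.
run_host_cmd_with_retries() {
    attempts="$1"; interval="$2"; shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then
            return 0
        fi
        echo "attempt $i failed; waiting ${interval}s to retry" >&2
        sleep "$interval"
        i=$((i + 1))
    done
    return 1
}
```

The e2e framework uses a 10s interval; a caller would invoke something like `run_host_cmd_with_retries 30 10 kubectl exec --namespace=statefulset-5643 ss-2 -- /bin/sh -c '...'`.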
Feb 15 00:55:47.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:55:47.316: INFO: rc: 1
Feb 15 00:55:47.316: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Feb 15 00:55:57.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 00:55:57.497: INFO: rc: 1
Feb 15 00:55:57.497: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 15 01:00:15.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 01:00:15.310: INFO: rc: 1
Feb 15 01:00:15.311: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 15 01:00:25.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 01:00:25.508: INFO: rc: 1
Feb 15 01:00:25.509: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 15 01:00:35.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 01:00:35.679: INFO: rc: 1
Feb 15 01:00:35.680: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 15 01:00:45.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 15 01:00:45.882: INFO: rc: 1
Feb 15 01:00:45.883: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
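The long stretch above is the framework's RunHostCmd retry loop: the same `kubectl exec` is reissued every 10 seconds until it succeeds or the overall timeout expires (here the pod `ss-2` was already gone, so every attempt returned rc 1). A minimal sketch of that retry pattern, with illustrative names rather than the e2e framework's actual helper:

```python
import time

def run_with_retry(cmd, max_attempts, delay_s, runner):
    """Re-run `cmd` via `runner` every `delay_s` seconds until it returns
    rc 0 or `max_attempts` is exhausted (sketch of the RunHostCmd retry
    seen in the log; all names here are illustrative)."""
    for attempt in range(1, max_attempts + 1):
        rc = runner(cmd)
        if rc == 0:
            return attempt
        if attempt < max_attempts:
            time.sleep(delay_s)
    raise TimeoutError(f"{cmd!r} still failing after {max_attempts} attempts")

# Stub runner standing in for kubectl exec: fails twice, then succeeds.
results = iter([1, 1, 0])
attempts_needed = run_with_retry(
    "mv -v /tmp/index.html /usr/local/apache2/htdocs/",
    max_attempts=5, delay_s=0, runner=lambda c: next(results))
```

In the log the pod never reappeared, so the loop ran out its budget and the test moved on to scaling the set to 0.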
Feb 15 01:00:45.883: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 15 01:00:45.898: INFO: Deleting all statefulset in ns statefulset-5643
Feb 15 01:00:45.902: INFO: Scaling statefulset ss to 0
Feb 15 01:00:45.912: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 01:00:45.914: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:00:45.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5643" for this suite.

• [SLOW TEST:393.221 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":165,"skipped":2767,"failed":0}
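The "scaled down in reverse order" verification above amounts to checking that StatefulSet pod ordinals disappear highest-first (ss-2 before ss-1 before ss-0). A standalone sketch of that invariant, using an illustrative helper rather than the framework's code:

```python
def scaled_down_in_reverse_order(deletion_order):
    """True iff StatefulSet pods were deleted highest-ordinal-first,
    e.g. ss-2 before ss-1 before ss-0 (illustrative check)."""
    ordinals = [int(name.rsplit("-", 1)[1]) for name in deletion_order]
    return ordinals == sorted(ordinals, reverse=True)

ok = scaled_down_in_reverse_order(["ss-2", "ss-1", "ss-0"])
bad = scaled_down_in_reverse_order(["ss-0", "ss-2", "ss-1"])
```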
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:00:45.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4830
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4830
STEP: creating replication controller externalsvc in namespace services-4830
I0215 01:00:46.194594      10 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4830, replica count: 2
I0215 01:00:49.246535      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:00:52.247023      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:00:55.247543      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 15 01:00:55.342: INFO: Creating new exec pod
Feb 15 01:01:01.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4830 execpodh4l4q -- /bin/sh -x -c nslookup nodeport-service'
Feb 15 01:01:01.839: INFO: stderr: "I0215 01:01:01.600819    4076 log.go:172] (0xc000a9c580) (0xc00054f540) Create stream\nI0215 01:01:01.601419    4076 log.go:172] (0xc000a9c580) (0xc00054f540) Stream added, broadcasting: 1\nI0215 01:01:01.608188    4076 log.go:172] (0xc000a9c580) Reply frame received for 1\nI0215 01:01:01.608240    4076 log.go:172] (0xc000a9c580) (0xc0006c9c20) Create stream\nI0215 01:01:01.608257    4076 log.go:172] (0xc000a9c580) (0xc0006c9c20) Stream added, broadcasting: 3\nI0215 01:01:01.610535    4076 log.go:172] (0xc000a9c580) Reply frame received for 3\nI0215 01:01:01.610700    4076 log.go:172] (0xc000a9c580) (0xc0009d6000) Create stream\nI0215 01:01:01.610722    4076 log.go:172] (0xc000a9c580) (0xc0009d6000) Stream added, broadcasting: 5\nI0215 01:01:01.613252    4076 log.go:172] (0xc000a9c580) Reply frame received for 5\nI0215 01:01:01.695740    4076 log.go:172] (0xc000a9c580) Data frame received for 5\nI0215 01:01:01.695796    4076 log.go:172] (0xc0009d6000) (5) Data frame handling\nI0215 01:01:01.695819    4076 log.go:172] (0xc0009d6000) (5) Data frame sent\n+ nslookup nodeport-service\nI0215 01:01:01.753620    4076 log.go:172] (0xc000a9c580) Data frame received for 3\nI0215 01:01:01.753666    4076 log.go:172] (0xc0006c9c20) (3) Data frame handling\nI0215 01:01:01.753687    4076 log.go:172] (0xc0006c9c20) (3) Data frame sent\nI0215 01:01:01.755409    4076 log.go:172] (0xc000a9c580) Data frame received for 3\nI0215 01:01:01.755458    4076 log.go:172] (0xc0006c9c20) (3) Data frame handling\nI0215 01:01:01.755478    4076 log.go:172] (0xc0006c9c20) (3) Data frame sent\nI0215 01:01:01.831740    4076 log.go:172] (0xc000a9c580) Data frame received for 1\nI0215 01:01:01.831829    4076 log.go:172] (0xc00054f540) (1) Data frame handling\nI0215 01:01:01.831862    4076 log.go:172] (0xc00054f540) (1) Data frame sent\nI0215 01:01:01.832092    4076 log.go:172] (0xc000a9c580) (0xc0006c9c20) Stream removed, broadcasting: 3\nI0215 01:01:01.832128    
4076 log.go:172] (0xc000a9c580) (0xc00054f540) Stream removed, broadcasting: 1\nI0215 01:01:01.832949    4076 log.go:172] (0xc000a9c580) (0xc0009d6000) Stream removed, broadcasting: 5\nI0215 01:01:01.833040    4076 log.go:172] (0xc000a9c580) Go away received\nI0215 01:01:01.833101    4076 log.go:172] (0xc000a9c580) (0xc00054f540) Stream removed, broadcasting: 1\nI0215 01:01:01.833118    4076 log.go:172] (0xc000a9c580) (0xc0006c9c20) Stream removed, broadcasting: 3\nI0215 01:01:01.833124    4076 log.go:172] (0xc000a9c580) (0xc0009d6000) Stream removed, broadcasting: 5\n"
Feb 15 01:01:01.840: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4830.svc.cluster.local\tcanonical name = externalsvc.services-4830.svc.cluster.local.\nName:\texternalsvc.services-4830.svc.cluster.local\nAddress: 10.96.163.206\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4830, will wait for the garbage collector to delete the pods
Feb 15 01:01:01.908: INFO: Deleting ReplicationController externalsvc took: 11.594692ms
Feb 15 01:01:02.209: INFO: Terminating ReplicationController externalsvc pods took: 301.096067ms
Feb 15 01:01:11.379: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:01:11.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4830" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:25.466 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":166,"skipped":2767,"failed":0}
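Switching a Service from NodePort to ExternalName, as the test above does, means replacing the spec type, pointing `externalName` at the target FQDN, and dropping fields that only apply to cluster-IP types (the per-port `nodePort` allocations and `clusterIP`). A sketch of that spec edit as plain dict surgery, assuming field names per the v1 Service schema:

```python
def to_external_name(svc, fqdn):
    """Mutate a v1 Service dict from NodePort to ExternalName: set the
    type and externalName, clear clusterIP, and strip per-port nodePort
    allocations (sketch; not client-go or the e2e framework's code)."""
    spec = svc["spec"]
    spec["type"] = "ExternalName"
    spec["externalName"] = fqdn
    spec.pop("clusterIP", None)
    for port in spec.get("ports", []):
        port.pop("nodePort", None)
    return svc

svc = {"spec": {"type": "NodePort", "clusterIP": "10.96.1.2",
                "ports": [{"port": 80, "nodePort": 30080}]}}
svc = to_external_name(svc, "externalsvc.services-4830.svc.cluster.local")
```

The test then confirms the change by resolving the old service name from an exec pod and seeing the CNAME point at `externalsvc`, as in the nslookup output above.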
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:01:11.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 15 01:01:11.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1096'
Feb 15 01:01:11.665: INFO: stderr: ""
Feb 15 01:01:11.665: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 15 01:01:21.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1096 -o json'
Feb 15 01:01:21.884: INFO: stderr: ""
Feb 15 01:01:21.885: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-15T01:01:11Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-1096\",\n        \"resourceVersion\": \"8490828\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1096/pods/e2e-test-httpd-pod\",\n        \"uid\": \"c2164d3d-3480-4cf4-95ee-16ec71416638\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-m2rsh\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-m2rsh\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-m2rsh\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-15T01:01:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-15T01:01:18Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-15T01:01:18Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-15T01:01:11Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://0e1e640c63fe988b92079adb110474b76f3744177beb5798f65ac37682ca4f8b\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                   
     \"startedAt\": \"2020-02-15T01:01:18Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-15T01:01:11Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 15 01:01:21.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1096'
Feb 15 01:01:22.427: INFO: stderr: ""
Feb 15 01:01:22.428: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904
Feb 15 01:01:22.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1096'
Feb 15 01:01:28.116: INFO: stderr: ""
Feb 15 01:01:28.117: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:01:28.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1096" for this suite.

• [SLOW TEST:16.706 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":280,"completed":167,"skipped":2772,"failed":0}
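The replace test above is a get/modify/replace round-trip: fetch the pod as JSON with `kubectl get pod -o json`, swap `spec.containers[0].image`, and feed the result back through `kubectl replace -f -`. The in-memory edit reduces to this (hypothetical helper; the actual test shells out to kubectl and pipes via stdin):

```python
import json

def replace_image(pod_json, new_image):
    """Return pod JSON with the first container's image swapped,
    mirroring the get -o json / edit / replace -f - round-trip
    (illustrative; the e2e test does this via kubectl)."""
    pod = json.loads(pod_json)
    pod["spec"]["containers"][0]["image"] = new_image
    return json.dumps(pod)

pod_json = json.dumps({"spec": {"containers": [
    {"name": "e2e-test-httpd-pod",
     "image": "docker.io/library/httpd:2.4.38-alpine"}]}})
updated = replace_image(pod_json, "docker.io/library/busybox:1.29")
```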
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:01:28.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-58c05b13-6041-43ff-97e3-b2d1a26246fc
STEP: Creating a pod to test consume secrets
Feb 15 01:01:28.287: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d" in namespace "projected-2716" to be "success or failure"
Feb 15 01:01:28.293: INFO: Pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.684161ms
Feb 15 01:01:30.302: INFO: Pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014422284s
Feb 15 01:01:32.311: INFO: Pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023199808s
Feb 15 01:01:34.320: INFO: Pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032657277s
Feb 15 01:01:36.332: INFO: Pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044170607s
STEP: Saw pod success
Feb 15 01:01:36.332: INFO: Pod "pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d" satisfied condition "success or failure"
Feb 15 01:01:36.337: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d container projected-secret-volume-test: 
STEP: delete the pod
Feb 15 01:01:36.417: INFO: Waiting for pod pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d to disappear
Feb 15 01:01:36.423: INFO: Pod pod-projected-secrets-4b781b51-bd3f-404a-8fa0-4cf3741a0e3d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:01:36.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2716" for this suite.

• [SLOW TEST:8.311 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2773,"failed":0}
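A projected secret volume, as consumed above, mounts selected Secret keys as files in the pod; the test pod simply cats the mounted file and the framework compares the logs. The volume source the test builds reduces to a structure like this (field names per the v1 projected-volume schema; the key/path values are illustrative):

```python
def projected_secret_volume(volume_name, secret_name, key_to_path):
    """Build a v1 projected-volume dict exposing selected Secret keys
    at chosen paths (sketch of what the e2e test constructs)."""
    return {
        "name": volume_name,
        "projected": {
            "sources": [{
                "secret": {
                    "name": secret_name,
                    "items": [{"key": k, "path": p}
                              for k, p in key_to_path.items()],
                }
            }]
        },
    }

vol = projected_secret_volume(
    "projected-secret-volume",
    "projected-secret-test-58c05b13-6041-43ff-97e3-b2d1a26246fc",
    {"data-1": "data-1"})
```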
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:01:36.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 15 01:01:56.815: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:56.815: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:56.888890      10 log.go:172] (0xc002b7f600) (0xc001f4fd60) Create stream
I0215 01:01:56.889099      10 log.go:172] (0xc002b7f600) (0xc001f4fd60) Stream added, broadcasting: 1
I0215 01:01:56.895352      10 log.go:172] (0xc002b7f600) Reply frame received for 1
I0215 01:01:56.895529      10 log.go:172] (0xc002b7f600) (0xc002ad8a00) Create stream
I0215 01:01:56.895557      10 log.go:172] (0xc002b7f600) (0xc002ad8a00) Stream added, broadcasting: 3
I0215 01:01:56.897978      10 log.go:172] (0xc002b7f600) Reply frame received for 3
I0215 01:01:56.898029      10 log.go:172] (0xc002b7f600) (0xc002b72140) Create stream
I0215 01:01:56.898044      10 log.go:172] (0xc002b7f600) (0xc002b72140) Stream added, broadcasting: 5
I0215 01:01:56.900950      10 log.go:172] (0xc002b7f600) Reply frame received for 5
I0215 01:01:57.019462      10 log.go:172] (0xc002b7f600) Data frame received for 3
I0215 01:01:57.019687      10 log.go:172] (0xc002ad8a00) (3) Data frame handling
I0215 01:01:57.019742      10 log.go:172] (0xc002ad8a00) (3) Data frame sent
I0215 01:01:57.136745      10 log.go:172] (0xc002b7f600) Data frame received for 1
I0215 01:01:57.136988      10 log.go:172] (0xc002b7f600) (0xc002b72140) Stream removed, broadcasting: 5
I0215 01:01:57.137042      10 log.go:172] (0xc001f4fd60) (1) Data frame handling
I0215 01:01:57.137061      10 log.go:172] (0xc001f4fd60) (1) Data frame sent
I0215 01:01:57.137251      10 log.go:172] (0xc002b7f600) (0xc002ad8a00) Stream removed, broadcasting: 3
I0215 01:01:57.137292      10 log.go:172] (0xc002b7f600) (0xc001f4fd60) Stream removed, broadcasting: 1
I0215 01:01:57.137318      10 log.go:172] (0xc002b7f600) Go away received
I0215 01:01:57.137774      10 log.go:172] (0xc002b7f600) (0xc001f4fd60) Stream removed, broadcasting: 1
I0215 01:01:57.137904      10 log.go:172] (0xc002b7f600) (0xc002ad8a00) Stream removed, broadcasting: 3
I0215 01:01:57.137912      10 log.go:172] (0xc002b7f600) (0xc002b72140) Stream removed, broadcasting: 5
Feb 15 01:01:57.137: INFO: Exec stderr: ""
Feb 15 01:01:57.138: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:57.138: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:57.169774      10 log.go:172] (0xc002b7fc30) (0xc00216e0a0) Create stream
I0215 01:01:57.169862      10 log.go:172] (0xc002b7fc30) (0xc00216e0a0) Stream added, broadcasting: 1
I0215 01:01:57.172296      10 log.go:172] (0xc002b7fc30) Reply frame received for 1
I0215 01:01:57.172332      10 log.go:172] (0xc002b7fc30) (0xc0024b40a0) Create stream
I0215 01:01:57.172345      10 log.go:172] (0xc002b7fc30) (0xc0024b40a0) Stream added, broadcasting: 3
I0215 01:01:57.173289      10 log.go:172] (0xc002b7fc30) Reply frame received for 3
I0215 01:01:57.173307      10 log.go:172] (0xc002b7fc30) (0xc00216e1e0) Create stream
I0215 01:01:57.173317      10 log.go:172] (0xc002b7fc30) (0xc00216e1e0) Stream added, broadcasting: 5
I0215 01:01:57.174506      10 log.go:172] (0xc002b7fc30) Reply frame received for 5
I0215 01:01:57.245360      10 log.go:172] (0xc002b7fc30) Data frame received for 3
I0215 01:01:57.245490      10 log.go:172] (0xc0024b40a0) (3) Data frame handling
I0215 01:01:57.245512      10 log.go:172] (0xc0024b40a0) (3) Data frame sent
I0215 01:01:57.317400      10 log.go:172] (0xc002b7fc30) (0xc00216e1e0) Stream removed, broadcasting: 5
I0215 01:01:57.317503      10 log.go:172] (0xc002b7fc30) Data frame received for 1
I0215 01:01:57.317545      10 log.go:172] (0xc002b7fc30) (0xc0024b40a0) Stream removed, broadcasting: 3
I0215 01:01:57.317583      10 log.go:172] (0xc00216e0a0) (1) Data frame handling
I0215 01:01:57.317600      10 log.go:172] (0xc00216e0a0) (1) Data frame sent
I0215 01:01:57.317610      10 log.go:172] (0xc002b7fc30) (0xc00216e0a0) Stream removed, broadcasting: 1
I0215 01:01:57.317625      10 log.go:172] (0xc002b7fc30) Go away received
I0215 01:01:57.317922      10 log.go:172] (0xc002b7fc30) (0xc00216e0a0) Stream removed, broadcasting: 1
I0215 01:01:57.317933      10 log.go:172] (0xc002b7fc30) (0xc0024b40a0) Stream removed, broadcasting: 3
I0215 01:01:57.317939      10 log.go:172] (0xc002b7fc30) (0xc00216e1e0) Stream removed, broadcasting: 5
Feb 15 01:01:57.317: INFO: Exec stderr: ""
Feb 15 01:01:57.318: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:57.318: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:57.366279      10 log.go:172] (0xc001b78420) (0xc002b72500) Create stream
I0215 01:01:57.366662      10 log.go:172] (0xc001b78420) (0xc002b72500) Stream added, broadcasting: 1
I0215 01:01:57.370982      10 log.go:172] (0xc001b78420) Reply frame received for 1
I0215 01:01:57.371044      10 log.go:172] (0xc001b78420) (0xc0018bdd60) Create stream
I0215 01:01:57.371055      10 log.go:172] (0xc001b78420) (0xc0018bdd60) Stream added, broadcasting: 3
I0215 01:01:57.373293      10 log.go:172] (0xc001b78420) Reply frame received for 3
I0215 01:01:57.373421      10 log.go:172] (0xc001b78420) (0xc0024b4820) Create stream
I0215 01:01:57.373437      10 log.go:172] (0xc001b78420) (0xc0024b4820) Stream added, broadcasting: 5
I0215 01:01:57.375207      10 log.go:172] (0xc001b78420) Reply frame received for 5
I0215 01:01:57.455773      10 log.go:172] (0xc001b78420) Data frame received for 3
I0215 01:01:57.455882      10 log.go:172] (0xc0018bdd60) (3) Data frame handling
I0215 01:01:57.455899      10 log.go:172] (0xc0018bdd60) (3) Data frame sent
I0215 01:01:57.532477      10 log.go:172] (0xc001b78420) (0xc0018bdd60) Stream removed, broadcasting: 3
I0215 01:01:57.532654      10 log.go:172] (0xc001b78420) Data frame received for 1
I0215 01:01:57.532690      10 log.go:172] (0xc002b72500) (1) Data frame handling
I0215 01:01:57.532717      10 log.go:172] (0xc002b72500) (1) Data frame sent
I0215 01:01:57.532736      10 log.go:172] (0xc001b78420) (0xc002b72500) Stream removed, broadcasting: 1
I0215 01:01:57.532777      10 log.go:172] (0xc001b78420) (0xc0024b4820) Stream removed, broadcasting: 5
I0215 01:01:57.532912      10 log.go:172] (0xc001b78420) Go away received
I0215 01:01:57.533096      10 log.go:172] (0xc001b78420) (0xc002b72500) Stream removed, broadcasting: 1
I0215 01:01:57.533117      10 log.go:172] (0xc001b78420) (0xc0018bdd60) Stream removed, broadcasting: 3
I0215 01:01:57.533130      10 log.go:172] (0xc001b78420) (0xc0024b4820) Stream removed, broadcasting: 5
Feb 15 01:01:57.533: INFO: Exec stderr: ""
Feb 15 01:01:57.533: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:57.533: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:57.572027      10 log.go:172] (0xc001728b00) (0xc001bda1e0) Create stream
I0215 01:01:57.572255      10 log.go:172] (0xc001728b00) (0xc001bda1e0) Stream added, broadcasting: 1
I0215 01:01:57.575578      10 log.go:172] (0xc001728b00) Reply frame received for 1
I0215 01:01:57.575681      10 log.go:172] (0xc001728b00) (0xc002ad8aa0) Create stream
I0215 01:01:57.575699      10 log.go:172] (0xc001728b00) (0xc002ad8aa0) Stream added, broadcasting: 3
I0215 01:01:57.577937      10 log.go:172] (0xc001728b00) Reply frame received for 3
I0215 01:01:57.577958      10 log.go:172] (0xc001728b00) (0xc002ad8b40) Create stream
I0215 01:01:57.577970      10 log.go:172] (0xc001728b00) (0xc002ad8b40) Stream added, broadcasting: 5
I0215 01:01:57.579422      10 log.go:172] (0xc001728b00) Reply frame received for 5
I0215 01:01:57.647916      10 log.go:172] (0xc001728b00) Data frame received for 3
I0215 01:01:57.648059      10 log.go:172] (0xc002ad8aa0) (3) Data frame handling
I0215 01:01:57.648094      10 log.go:172] (0xc002ad8aa0) (3) Data frame sent
I0215 01:01:57.705662      10 log.go:172] (0xc001728b00) (0xc002ad8aa0) Stream removed, broadcasting: 3
I0215 01:01:57.705821      10 log.go:172] (0xc001728b00) Data frame received for 1
I0215 01:01:57.705849      10 log.go:172] (0xc001728b00) (0xc002ad8b40) Stream removed, broadcasting: 5
I0215 01:01:57.705872      10 log.go:172] (0xc001bda1e0) (1) Data frame handling
I0215 01:01:57.705889      10 log.go:172] (0xc001bda1e0) (1) Data frame sent
I0215 01:01:57.705898      10 log.go:172] (0xc001728b00) (0xc001bda1e0) Stream removed, broadcasting: 1
I0215 01:01:57.705912      10 log.go:172] (0xc001728b00) Go away received
I0215 01:01:57.706165      10 log.go:172] (0xc001728b00) (0xc001bda1e0) Stream removed, broadcasting: 1
I0215 01:01:57.706175      10 log.go:172] (0xc001728b00) (0xc002ad8aa0) Stream removed, broadcasting: 3
I0215 01:01:57.706180      10 log.go:172] (0xc001728b00) (0xc002ad8b40) Stream removed, broadcasting: 5
Feb 15 01:01:57.706: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 15 01:01:57.706: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:57.706: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:57.743118      10 log.go:172] (0xc001dd04d0) (0xc002ad9040) Create stream
I0215 01:01:57.743324      10 log.go:172] (0xc001dd04d0) (0xc002ad9040) Stream added, broadcasting: 1
I0215 01:01:57.747972      10 log.go:172] (0xc001dd04d0) Reply frame received for 1
I0215 01:01:57.748055      10 log.go:172] (0xc001dd04d0) (0xc0024b4be0) Create stream
I0215 01:01:57.748069      10 log.go:172] (0xc001dd04d0) (0xc0024b4be0) Stream added, broadcasting: 3
I0215 01:01:57.750157      10 log.go:172] (0xc001dd04d0) Reply frame received for 3
I0215 01:01:57.750212      10 log.go:172] (0xc001dd04d0) (0xc00216e280) Create stream
I0215 01:01:57.750235      10 log.go:172] (0xc001dd04d0) (0xc00216e280) Stream added, broadcasting: 5
I0215 01:01:57.751769      10 log.go:172] (0xc001dd04d0) Reply frame received for 5
I0215 01:01:57.838261      10 log.go:172] (0xc001dd04d0) Data frame received for 3
I0215 01:01:57.838373      10 log.go:172] (0xc0024b4be0) (3) Data frame handling
I0215 01:01:57.838412      10 log.go:172] (0xc0024b4be0) (3) Data frame sent
I0215 01:01:57.906737      10 log.go:172] (0xc001dd04d0) (0xc0024b4be0) Stream removed, broadcasting: 3
I0215 01:01:57.906938      10 log.go:172] (0xc001dd04d0) (0xc00216e280) Stream removed, broadcasting: 5
I0215 01:01:57.907006      10 log.go:172] (0xc001dd04d0) Data frame received for 1
I0215 01:01:57.907035      10 log.go:172] (0xc002ad9040) (1) Data frame handling
I0215 01:01:57.907059      10 log.go:172] (0xc002ad9040) (1) Data frame sent
I0215 01:01:57.907078      10 log.go:172] (0xc001dd04d0) (0xc002ad9040) Stream removed, broadcasting: 1
I0215 01:01:57.907093      10 log.go:172] (0xc001dd04d0) Go away received
I0215 01:01:57.907371      10 log.go:172] (0xc001dd04d0) (0xc002ad9040) Stream removed, broadcasting: 1
I0215 01:01:57.907398      10 log.go:172] (0xc001dd04d0) (0xc0024b4be0) Stream removed, broadcasting: 3
I0215 01:01:57.907421      10 log.go:172] (0xc001dd04d0) (0xc00216e280) Stream removed, broadcasting: 5
Feb 15 01:01:57.907: INFO: Exec stderr: ""
Feb 15 01:01:57.907: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:57.907: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:57.953205      10 log.go:172] (0xc001b78a50) (0xc002b72820) Create stream
I0215 01:01:57.953318      10 log.go:172] (0xc001b78a50) (0xc002b72820) Stream added, broadcasting: 1
I0215 01:01:57.956049      10 log.go:172] (0xc001b78a50) Reply frame received for 1
I0215 01:01:57.956140      10 log.go:172] (0xc001b78a50) (0xc002b728c0) Create stream
I0215 01:01:57.956148      10 log.go:172] (0xc001b78a50) (0xc002b728c0) Stream added, broadcasting: 3
I0215 01:01:57.957571      10 log.go:172] (0xc001b78a50) Reply frame received for 3
I0215 01:01:57.957633      10 log.go:172] (0xc001b78a50) (0xc002d4e000) Create stream
I0215 01:01:57.957648      10 log.go:172] (0xc001b78a50) (0xc002d4e000) Stream added, broadcasting: 5
I0215 01:01:57.959135      10 log.go:172] (0xc001b78a50) Reply frame received for 5
I0215 01:01:58.034665      10 log.go:172] (0xc001b78a50) Data frame received for 3
I0215 01:01:58.034723      10 log.go:172] (0xc002b728c0) (3) Data frame handling
I0215 01:01:58.034760      10 log.go:172] (0xc002b728c0) (3) Data frame sent
I0215 01:01:58.102247      10 log.go:172] (0xc001b78a50) Data frame received for 1
I0215 01:01:58.102375      10 log.go:172] (0xc001b78a50) (0xc002b728c0) Stream removed, broadcasting: 3
I0215 01:01:58.102444      10 log.go:172] (0xc002b72820) (1) Data frame handling
I0215 01:01:58.102465      10 log.go:172] (0xc002b72820) (1) Data frame sent
I0215 01:01:58.102475      10 log.go:172] (0xc001b78a50) (0xc002b72820) Stream removed, broadcasting: 1
I0215 01:01:58.104568      10 log.go:172] (0xc001b78a50) (0xc002d4e000) Stream removed, broadcasting: 5
I0215 01:01:58.104832      10 log.go:172] (0xc001b78a50) Go away received
I0215 01:01:58.104942      10 log.go:172] (0xc001b78a50) (0xc002b72820) Stream removed, broadcasting: 1
I0215 01:01:58.104972      10 log.go:172] (0xc001b78a50) (0xc002b728c0) Stream removed, broadcasting: 3
I0215 01:01:58.105012      10 log.go:172] (0xc001b78a50) (0xc002d4e000) Stream removed, broadcasting: 5
Feb 15 01:01:58.105: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 15 01:01:58.105: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:58.105: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:58.156224      10 log.go:172] (0xc001b78fd0) (0xc002b72a00) Create stream
I0215 01:01:58.156334      10 log.go:172] (0xc001b78fd0) (0xc002b72a00) Stream added, broadcasting: 1
I0215 01:01:58.158366      10 log.go:172] (0xc001b78fd0) Reply frame received for 1
I0215 01:01:58.158385      10 log.go:172] (0xc001b78fd0) (0xc00216e320) Create stream
I0215 01:01:58.158391      10 log.go:172] (0xc001b78fd0) (0xc00216e320) Stream added, broadcasting: 3
I0215 01:01:58.159618      10 log.go:172] (0xc001b78fd0) Reply frame received for 3
I0215 01:01:58.159642      10 log.go:172] (0xc001b78fd0) (0xc001bda3c0) Create stream
I0215 01:01:58.159651      10 log.go:172] (0xc001b78fd0) (0xc001bda3c0) Stream added, broadcasting: 5
I0215 01:01:58.161125      10 log.go:172] (0xc001b78fd0) Reply frame received for 5
I0215 01:01:58.218290      10 log.go:172] (0xc001b78fd0) Data frame received for 3
I0215 01:01:58.218384      10 log.go:172] (0xc00216e320) (3) Data frame handling
I0215 01:01:58.218408      10 log.go:172] (0xc00216e320) (3) Data frame sent
I0215 01:01:58.292203      10 log.go:172] (0xc001b78fd0) (0xc00216e320) Stream removed, broadcasting: 3
I0215 01:01:58.292490      10 log.go:172] (0xc001b78fd0) Data frame received for 1
I0215 01:01:58.292557      10 log.go:172] (0xc002b72a00) (1) Data frame handling
I0215 01:01:58.292591      10 log.go:172] (0xc002b72a00) (1) Data frame sent
I0215 01:01:58.292610      10 log.go:172] (0xc001b78fd0) (0xc001bda3c0) Stream removed, broadcasting: 5
I0215 01:01:58.292691      10 log.go:172] (0xc001b78fd0) (0xc002b72a00) Stream removed, broadcasting: 1
I0215 01:01:58.293135      10 log.go:172] (0xc001b78fd0) Go away received
I0215 01:01:58.293493      10 log.go:172] (0xc001b78fd0) (0xc002b72a00) Stream removed, broadcasting: 1
I0215 01:01:58.293568      10 log.go:172] (0xc001b78fd0) (0xc00216e320) Stream removed, broadcasting: 3
I0215 01:01:58.293586      10 log.go:172] (0xc001b78fd0) (0xc001bda3c0) Stream removed, broadcasting: 5
Feb 15 01:01:58.293: INFO: Exec stderr: ""
Feb 15 01:01:58.293: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:58.293: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:58.341232      10 log.go:172] (0xc001c3e000) (0xc002d4e3c0) Create stream
I0215 01:01:58.341350      10 log.go:172] (0xc001c3e000) (0xc002d4e3c0) Stream added, broadcasting: 1
I0215 01:01:58.345537      10 log.go:172] (0xc001c3e000) Reply frame received for 1
I0215 01:01:58.345572      10 log.go:172] (0xc001c3e000) (0xc001bda5a0) Create stream
I0215 01:01:58.345581      10 log.go:172] (0xc001c3e000) (0xc001bda5a0) Stream added, broadcasting: 3
I0215 01:01:58.346989      10 log.go:172] (0xc001c3e000) Reply frame received for 3
I0215 01:01:58.347019      10 log.go:172] (0xc001c3e000) (0xc002b72b40) Create stream
I0215 01:01:58.347030      10 log.go:172] (0xc001c3e000) (0xc002b72b40) Stream added, broadcasting: 5
I0215 01:01:58.348669      10 log.go:172] (0xc001c3e000) Reply frame received for 5
I0215 01:01:58.435893      10 log.go:172] (0xc001c3e000) Data frame received for 3
I0215 01:01:58.436327      10 log.go:172] (0xc001bda5a0) (3) Data frame handling
I0215 01:01:58.436408      10 log.go:172] (0xc001bda5a0) (3) Data frame sent
I0215 01:01:58.526595      10 log.go:172] (0xc001c3e000) Data frame received for 1
I0215 01:01:58.527059      10 log.go:172] (0xc001c3e000) (0xc002b72b40) Stream removed, broadcasting: 5
I0215 01:01:58.527183      10 log.go:172] (0xc002d4e3c0) (1) Data frame handling
I0215 01:01:58.527337      10 log.go:172] (0xc002d4e3c0) (1) Data frame sent
I0215 01:01:58.527379      10 log.go:172] (0xc001c3e000) (0xc001bda5a0) Stream removed, broadcasting: 3
I0215 01:01:58.527442      10 log.go:172] (0xc001c3e000) (0xc002d4e3c0) Stream removed, broadcasting: 1
I0215 01:01:58.527506      10 log.go:172] (0xc001c3e000) Go away received
I0215 01:01:58.528016      10 log.go:172] (0xc001c3e000) (0xc002d4e3c0) Stream removed, broadcasting: 1
I0215 01:01:58.528067      10 log.go:172] (0xc001c3e000) (0xc001bda5a0) Stream removed, broadcasting: 3
I0215 01:01:58.528114      10 log.go:172] (0xc001c3e000) (0xc002b72b40) Stream removed, broadcasting: 5
Feb 15 01:01:58.528: INFO: Exec stderr: ""
Feb 15 01:01:58.528: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:58.528: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:58.578744      10 log.go:172] (0xc001b79550) (0xc002b72c80) Create stream
I0215 01:01:58.578993      10 log.go:172] (0xc001b79550) (0xc002b72c80) Stream added, broadcasting: 1
I0215 01:01:58.584335      10 log.go:172] (0xc001b79550) Reply frame received for 1
I0215 01:01:58.584399      10 log.go:172] (0xc001b79550) (0xc002d4e500) Create stream
I0215 01:01:58.584410      10 log.go:172] (0xc001b79550) (0xc002d4e500) Stream added, broadcasting: 3
I0215 01:01:58.585804      10 log.go:172] (0xc001b79550) Reply frame received for 3
I0215 01:01:58.585830      10 log.go:172] (0xc001b79550) (0xc00216e460) Create stream
I0215 01:01:58.585844      10 log.go:172] (0xc001b79550) (0xc00216e460) Stream added, broadcasting: 5
I0215 01:01:58.587686      10 log.go:172] (0xc001b79550) Reply frame received for 5
I0215 01:01:58.675735      10 log.go:172] (0xc001b79550) Data frame received for 3
I0215 01:01:58.676068      10 log.go:172] (0xc002d4e500) (3) Data frame handling
I0215 01:01:58.676177      10 log.go:172] (0xc002d4e500) (3) Data frame sent
I0215 01:01:58.763810      10 log.go:172] (0xc001b79550) Data frame received for 1
I0215 01:01:58.763956      10 log.go:172] (0xc002b72c80) (1) Data frame handling
I0215 01:01:58.763991      10 log.go:172] (0xc002b72c80) (1) Data frame sent
I0215 01:01:58.764225      10 log.go:172] (0xc001b79550) (0xc002b72c80) Stream removed, broadcasting: 1
I0215 01:01:58.764846      10 log.go:172] (0xc001b79550) (0xc002d4e500) Stream removed, broadcasting: 3
I0215 01:01:58.765036      10 log.go:172] (0xc001b79550) (0xc00216e460) Stream removed, broadcasting: 5
I0215 01:01:58.765072      10 log.go:172] (0xc001b79550) Go away received
I0215 01:01:58.765125      10 log.go:172] (0xc001b79550) (0xc002b72c80) Stream removed, broadcasting: 1
I0215 01:01:58.765135      10 log.go:172] (0xc001b79550) (0xc002d4e500) Stream removed, broadcasting: 3
I0215 01:01:58.765142      10 log.go:172] (0xc001b79550) (0xc00216e460) Stream removed, broadcasting: 5
Feb 15 01:01:58.765: INFO: Exec stderr: ""
Feb 15 01:01:58.765: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4851 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:01:58.765: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:01:58.800483      10 log.go:172] (0xc001eda2c0) (0xc00216e640) Create stream
I0215 01:01:58.800692      10 log.go:172] (0xc001eda2c0) (0xc00216e640) Stream added, broadcasting: 1
I0215 01:01:58.805038      10 log.go:172] (0xc001eda2c0) Reply frame received for 1
I0215 01:01:58.805090      10 log.go:172] (0xc001eda2c0) (0xc002ad9180) Create stream
I0215 01:01:58.805103      10 log.go:172] (0xc001eda2c0) (0xc002ad9180) Stream added, broadcasting: 3
I0215 01:01:58.806844      10 log.go:172] (0xc001eda2c0) Reply frame received for 3
I0215 01:01:58.806872      10 log.go:172] (0xc001eda2c0) (0xc002ad9220) Create stream
I0215 01:01:58.806882      10 log.go:172] (0xc001eda2c0) (0xc002ad9220) Stream added, broadcasting: 5
I0215 01:01:58.808133      10 log.go:172] (0xc001eda2c0) Reply frame received for 5
I0215 01:01:58.875049      10 log.go:172] (0xc001eda2c0) Data frame received for 3
I0215 01:01:58.875193      10 log.go:172] (0xc002ad9180) (3) Data frame handling
I0215 01:01:58.875225      10 log.go:172] (0xc002ad9180) (3) Data frame sent
I0215 01:01:58.951627      10 log.go:172] (0xc001eda2c0) Data frame received for 1
I0215 01:01:58.951718      10 log.go:172] (0xc001eda2c0) (0xc002ad9180) Stream removed, broadcasting: 3
I0215 01:01:58.951798      10 log.go:172] (0xc00216e640) (1) Data frame handling
I0215 01:01:58.951823      10 log.go:172] (0xc00216e640) (1) Data frame sent
I0215 01:01:58.951858      10 log.go:172] (0xc001eda2c0) (0xc002ad9220) Stream removed, broadcasting: 5
I0215 01:01:58.951901      10 log.go:172] (0xc001eda2c0) (0xc00216e640) Stream removed, broadcasting: 1
I0215 01:01:58.951931      10 log.go:172] (0xc001eda2c0) Go away received
I0215 01:01:58.952466      10 log.go:172] (0xc001eda2c0) (0xc00216e640) Stream removed, broadcasting: 1
I0215 01:01:58.952484      10 log.go:172] (0xc001eda2c0) (0xc002ad9180) Stream removed, broadcasting: 3
I0215 01:01:58.952494      10 log.go:172] (0xc001eda2c0) (0xc002ad9220) Stream removed, broadcasting: 5
Feb 15 01:01:58.952: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:01:58.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4851" for this suite.

• [SLOW TEST:22.528 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":169,"skipped":2788,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:01:58.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:01:59.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5153" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":170,"skipped":2896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:01:59.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Feb 15 01:01:59.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 15 01:01:59.413: INFO: stderr: ""
Feb 15 01:01:59.413: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:01:59.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5000" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":171,"skipped":2927,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:02:03.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-3885
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 15 01:02:03.540: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 15 01:02:03.590: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:02:05.788: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:02:07.624: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:02:10.881: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:02:11.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:02:13.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:02:15.598: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:17.598: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:19.595: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:21.597: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:23.600: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:25.598: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:27.599: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:29.595: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:31.596: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:02:33.600: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 15 01:02:33.608: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 15 01:02:45.791: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3885 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:02:45.791: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:02:45.881945      10 log.go:172] (0xc001eda790) (0xc00216fb80) Create stream
I0215 01:02:45.882265      10 log.go:172] (0xc001eda790) (0xc00216fb80) Stream added, broadcasting: 1
I0215 01:02:45.891734      10 log.go:172] (0xc001eda790) Reply frame received for 1
I0215 01:02:45.891988      10 log.go:172] (0xc001eda790) (0xc00216fd60) Create stream
I0215 01:02:45.892012      10 log.go:172] (0xc001eda790) (0xc00216fd60) Stream added, broadcasting: 3
I0215 01:02:45.895666      10 log.go:172] (0xc001eda790) Reply frame received for 3
I0215 01:02:45.895730      10 log.go:172] (0xc001eda790) (0xc001bdbea0) Create stream
I0215 01:02:45.895775      10 log.go:172] (0xc001eda790) (0xc001bdbea0) Stream added, broadcasting: 5
I0215 01:02:45.897985      10 log.go:172] (0xc001eda790) Reply frame received for 5
I0215 01:02:47.141882      10 log.go:172] (0xc001eda790) Data frame received for 3
I0215 01:02:47.141969      10 log.go:172] (0xc00216fd60) (3) Data frame handling
I0215 01:02:47.142001      10 log.go:172] (0xc00216fd60) (3) Data frame sent
I0215 01:02:47.236360      10 log.go:172] (0xc001eda790) (0xc00216fd60) Stream removed, broadcasting: 3
I0215 01:02:47.236430      10 log.go:172] (0xc001eda790) Data frame received for 1
I0215 01:02:47.236435      10 log.go:172] (0xc00216fb80) (1) Data frame handling
I0215 01:02:47.236445      10 log.go:172] (0xc00216fb80) (1) Data frame sent
I0215 01:02:47.236455      10 log.go:172] (0xc001eda790) (0xc00216fb80) Stream removed, broadcasting: 1
I0215 01:02:47.236514      10 log.go:172] (0xc001eda790) (0xc001bdbea0) Stream removed, broadcasting: 5
I0215 01:02:47.236523      10 log.go:172] (0xc001eda790) Go away received
I0215 01:02:47.236982      10 log.go:172] (0xc001eda790) (0xc00216fb80) Stream removed, broadcasting: 1
I0215 01:02:47.237009      10 log.go:172] (0xc001eda790) (0xc00216fd60) Stream removed, broadcasting: 3
I0215 01:02:47.237023      10 log.go:172] (0xc001eda790) (0xc001bdbea0) Stream removed, broadcasting: 5
Feb 15 01:02:47.237: INFO: Found all expected endpoints: [netserver-0]
Feb 15 01:02:47.243: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3885 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:02:47.243: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:02:47.301303      10 log.go:172] (0xc001dd0bb0) (0xc002e0e820) Create stream
I0215 01:02:47.301437      10 log.go:172] (0xc001dd0bb0) (0xc002e0e820) Stream added, broadcasting: 1
I0215 01:02:47.306013      10 log.go:172] (0xc001dd0bb0) Reply frame received for 1
I0215 01:02:47.306055      10 log.go:172] (0xc001dd0bb0) (0xc00241d360) Create stream
I0215 01:02:47.306069      10 log.go:172] (0xc001dd0bb0) (0xc00241d360) Stream added, broadcasting: 3
I0215 01:02:47.307712      10 log.go:172] (0xc001dd0bb0) Reply frame received for 3
I0215 01:02:47.307759      10 log.go:172] (0xc001dd0bb0) (0xc00241d400) Create stream
I0215 01:02:47.307770      10 log.go:172] (0xc001dd0bb0) (0xc00241d400) Stream added, broadcasting: 5
I0215 01:02:47.309480      10 log.go:172] (0xc001dd0bb0) Reply frame received for 5
I0215 01:02:48.414926      10 log.go:172] (0xc001dd0bb0) Data frame received for 3
I0215 01:02:48.415025      10 log.go:172] (0xc00241d360) (3) Data frame handling
I0215 01:02:48.415063      10 log.go:172] (0xc00241d360) (3) Data frame sent
I0215 01:02:48.539807      10 log.go:172] (0xc001dd0bb0) (0xc00241d360) Stream removed, broadcasting: 3
I0215 01:02:48.540251      10 log.go:172] (0xc001dd0bb0) Data frame received for 1
I0215 01:02:48.540507      10 log.go:172] (0xc001dd0bb0) (0xc00241d400) Stream removed, broadcasting: 5
I0215 01:02:48.540617      10 log.go:172] (0xc002e0e820) (1) Data frame handling
I0215 01:02:48.540702      10 log.go:172] (0xc002e0e820) (1) Data frame sent
I0215 01:02:48.540761      10 log.go:172] (0xc001dd0bb0) (0xc002e0e820) Stream removed, broadcasting: 1
I0215 01:02:48.540802      10 log.go:172] (0xc001dd0bb0) Go away received
I0215 01:02:48.541378      10 log.go:172] (0xc001dd0bb0) (0xc002e0e820) Stream removed, broadcasting: 1
I0215 01:02:48.541395      10 log.go:172] (0xc001dd0bb0) (0xc00241d360) Stream removed, broadcasting: 3
I0215 01:02:48.541400      10 log.go:172] (0xc001dd0bb0) (0xc00241d400) Stream removed, broadcasting: 5
Feb 15 01:02:48.541: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:02:48.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3885" for this suite.

• [SLOW TEST:45.424 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":172,"skipped":2934,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:02:48.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 15 01:02:48.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5392'
Feb 15 01:02:49.037: INFO: stderr: ""
Feb 15 01:02:49.038: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 15 01:02:50.048: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:50.048: INFO: Found 0 / 1
Feb 15 01:02:51.051: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:51.051: INFO: Found 0 / 1
Feb 15 01:02:52.069: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:52.069: INFO: Found 0 / 1
Feb 15 01:02:53.043: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:53.043: INFO: Found 0 / 1
Feb 15 01:02:54.085: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:54.086: INFO: Found 0 / 1
Feb 15 01:02:56.009: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:56.009: INFO: Found 0 / 1
Feb 15 01:02:56.895: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:56.895: INFO: Found 0 / 1
Feb 15 01:02:57.150: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:57.150: INFO: Found 0 / 1
Feb 15 01:02:58.045: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:58.046: INFO: Found 1 / 1
Feb 15 01:02:58.046: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 15 01:02:58.050: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:58.050: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 15 01:02:58.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-p4x4z --namespace=kubectl-5392 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 15 01:02:58.244: INFO: stderr: ""
Feb 15 01:02:58.244: INFO: stdout: "pod/agnhost-master-p4x4z patched\n"
STEP: checking annotations
Feb 15 01:02:58.256: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 15 01:02:58.256: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:02:58.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5392" for this suite.

• [SLOW TEST:9.833 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":173,"skipped":2943,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:02:58.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-5459/secret-test-95399f39-b6b9-4a9f-af13-5c4039edf8fe
STEP: Creating a pod to test consume secrets
Feb 15 01:03:00.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015" in namespace "secrets-5459" to be "success or failure"
Feb 15 01:03:00.750: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015": Phase="Pending", Reason="", readiness=false. Elapsed: 229.572155ms
Feb 15 01:03:02.761: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241165126s
Feb 15 01:03:04.801: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280960676s
Feb 15 01:03:06.809: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288663948s
Feb 15 01:03:08.814: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294221921s
Feb 15 01:03:10.848: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.327486962s
STEP: Saw pod success
Feb 15 01:03:10.848: INFO: Pod "pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015" satisfied condition "success or failure"
Feb 15 01:03:10.971: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015 container env-test: 
STEP: delete the pod
Feb 15 01:03:11.052: INFO: Waiting for pod pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015 to disappear
Feb 15 01:03:11.097: INFO: Pod pod-configmaps-b1545f32-3bf2-4aff-9d2d-df06d9ba8015 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:03:11.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5459" for this suite.

• [SLOW TEST:12.699 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":174,"skipped":2949,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:03:11.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:03:11.179: INFO: Creating deployment "test-recreate-deployment"
Feb 15 01:03:11.286: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 15 01:03:11.319: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 15 01:03:13.338: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 15 01:03:13.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:03:15.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:03:17.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325391, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:03:19.349: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 15 01:03:19.360: INFO: Updating deployment test-recreate-deployment
Feb 15 01:03:19.360: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 15 01:03:19.871: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1190 /apis/apps/v1/namespaces/deployment-1190/deployments/test-recreate-deployment a00a6e29-c550-4a1b-8b68-4c53112e0824 8491375 2 2020-02-15 01:03:11 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00414c8e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-15 01:03:19 +0000 UTC,LastTransitionTime:2020-02-15 01:03:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-15 01:03:19 +0000 UTC,LastTransitionTime:2020-02-15 01:03:11 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 15 01:03:19.879: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-1190 /apis/apps/v1/namespaces/deployment-1190/replicasets/test-recreate-deployment-5f94c574ff 56086577-4384-450e-821d-446ea27cd7c4 8491372 1 2020-02-15 01:03:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a00a6e29-c550-4a1b-8b68-4c53112e0824 0xc0030fc0f7 0xc0030fc0f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030fc158  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 15 01:03:19.879: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 15 01:03:19.879: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-1190 /apis/apps/v1/namespaces/deployment-1190/replicasets/test-recreate-deployment-799c574856 505c7ea4-8bc8-4290-a1d1-da8a2a74fc96 8491363 2 2020-02-15 01:03:11 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment a00a6e29-c550-4a1b-8b68-4c53112e0824 0xc0030fc1c7 0xc0030fc1c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030fc238  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 15 01:03:19.884: INFO: Pod "test-recreate-deployment-5f94c574ff-g64xp" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-g64xp test-recreate-deployment-5f94c574ff- deployment-1190 /api/v1/namespaces/deployment-1190/pods/test-recreate-deployment-5f94c574ff-g64xp 8e78b022-58cd-494d-93af-8c59af70278c 8491374 0 2020-02-15 01:03:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 56086577-4384-450e-821d-446ea27cd7c4 0xc003b68127 0xc003b68128}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srjj9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srjj9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srjj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-15 01:03:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:03:19.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1190" for this suite.

• [SLOW TEST:8.787 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":175,"skipped":2957,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:03:19.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:03:20.133: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186" in namespace "downward-api-9899" to be "success or failure"
Feb 15 01:03:20.145: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Pending", Reason="", readiness=false. Elapsed: 11.882744ms
Feb 15 01:03:22.153: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019544535s
Feb 15 01:03:24.159: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025387467s
Feb 15 01:03:26.176: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042242794s
Feb 15 01:03:28.183: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04976599s
Feb 15 01:03:30.191: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057351638s
Feb 15 01:03:32.557: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.423548186s
STEP: Saw pod success
Feb 15 01:03:32.557: INFO: Pod "downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186" satisfied condition "success or failure"
Feb 15 01:03:32.572: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186 container client-container: 
STEP: delete the pod
Feb 15 01:03:32.838: INFO: Waiting for pod downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186 to disappear
Feb 15 01:03:32.845: INFO: Pod downwardapi-volume-4740ed69-2f02-4e5d-bf8d-979f86515186 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:03:32.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9899" for this suite.

• [SLOW TEST:12.958 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":176,"skipped":2969,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:03:32.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 15 01:03:40.984: INFO: &Pod{ObjectMeta:{send-events-2567117f-b0ae-4acb-86aa-fda24181b2ba  events-4419 /api/v1/namespaces/events-4419/pods/send-events-2567117f-b0ae-4acb-86aa-fda24181b2ba 3b2d3daf-8781-4d82-8f14-79c3a136a4a0 8491480 0 2020-02-15 01:03:32 +0000 UTC   map[name:foo time:941420273] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mxhlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mxhlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mxhlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:03:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-15 01:03:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 01:03:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7c2c70d1f72bdf7ecb73b0d8fc92bfc411a350deee1b9effed85ba4fe4219229,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}


STEP: checking for scheduler event about the pod
Feb 15 01:03:43.001: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 15 01:03:45.009: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:03:45.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4419" for this suite.

• [SLOW TEST:12.235 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":280,"completed":177,"skipped":2984,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:03:45.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Feb 15 01:03:45.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8065'
Feb 15 01:03:45.686: INFO: stderr: ""
Feb 15 01:03:45.686: INFO: stdout: "pod/pause created\n"
Feb 15 01:03:45.686: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 15 01:03:45.686: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8065" to be "running and ready"
Feb 15 01:03:45.691: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371341ms
Feb 15 01:03:47.714: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027532518s
Feb 15 01:03:49.724: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037719301s
Feb 15 01:03:51.733: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046846752s
Feb 15 01:03:53.744: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.058320543s
Feb 15 01:03:53.745: INFO: Pod "pause" satisfied condition "running and ready"
Feb 15 01:03:53.745: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 15 01:03:53.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8065'
Feb 15 01:03:53.902: INFO: stderr: ""
Feb 15 01:03:53.902: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 15 01:03:53.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8065'
Feb 15 01:03:54.043: INFO: stderr: ""
Feb 15 01:03:54.044: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 15 01:03:54.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8065'
Feb 15 01:03:54.200: INFO: stderr: ""
Feb 15 01:03:54.201: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 15 01:03:54.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8065'
Feb 15 01:03:54.323: INFO: stderr: ""
Feb 15 01:03:54.323: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Feb 15 01:03:54.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8065'
Feb 15 01:03:54.468: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 01:03:54.468: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 15 01:03:54.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8065'
Feb 15 01:03:54.630: INFO: stderr: "No resources found in kubectl-8065 namespace.\n"
Feb 15 01:03:54.630: INFO: stdout: ""
Feb 15 01:03:54.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8065 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 15 01:03:54.720: INFO: stderr: ""
Feb 15 01:03:54.720: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:03:54.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8065" for this suite.

• [SLOW TEST:9.639 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":280,"completed":178,"skipped":3041,"failed":0}
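For readers following along, the label add/verify/remove cycle the test drives above maps onto a short kubectl session (pod name `pause` and namespace `kubectl-8065` are from this run; substitute your own):

```shell
# Add a label to the pod.
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-8065

# -L adds a column showing the label's value.
kubectl get pod pause -L testing-label --namespace=kubectl-8065

# A trailing '-' on the key removes the label.
kubectl label pods pause testing-label- --namespace=kubectl-8065

# Verify removal: the TESTING-LABEL column is now empty.
kubectl get pod pause -L testing-label --namespace=kubectl-8065
```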
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:03:54.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 15 01:04:06.059: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:04:07.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9931" for this suite.

• [SLOW TEST:12.352 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":179,"skipped":3054,"failed":0}
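The adoption/release behaviour above can be sketched with a pair of hypothetical manifests (names and image are illustrative, not taken from the test). Because the bare pod's labels match the ReplicaSet's selector, the ReplicaSet controller adopts it — setting an `ownerReference` — instead of creating a new replica:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```

Editing the pod's `name` label afterwards is the "release" half: the controller removes the ownerReference from the no-longer-matching pod and creates a replacement to satisfy `replicas: 1`.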
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:04:07.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Feb 15 01:04:07.284: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
(Note: the doubled "kubectl kubectl" above is an artifact of the e2e framework logging the binary name twice; the actual command run is `/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter`.)
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:04:07.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8238" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":180,"skipped":3059,"failed":0}
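The proxy invocation above relies on port 0 meaning "pick an ephemeral port". A minimal manual equivalent, assuming a working kubeconfig:

```shell
# Port 0 asks the proxy to bind an ephemeral port; the chosen port is
# printed on stdout, e.g. "Starting to serve on 127.0.0.1:XXXXX".
kubectl proxy --port=0 &

# With that port in hand, the API is reachable without further auth,
# which is what the test's "curling proxy /api/ output" step checks:
#   curl http://127.0.0.1:<port>/api/
```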
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:04:07.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3687
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3687
I0215 01:04:07.683097      10 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3687, replica count: 2
I0215 01:04:10.734585      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:04:13.735752      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:04:16.736609      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:04:19.737161      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:04:22.738020      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:04:25.738914      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 15 01:04:25.739: INFO: Creating new exec pod
Feb 15 01:04:36.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3687 execpod75qln -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 15 01:04:37.252: INFO: stderr: "I0215 01:04:37.064125    4388 log.go:172] (0xc000afb3f0) (0xc000a066e0) Create stream\nI0215 01:04:37.064660    4388 log.go:172] (0xc000afb3f0) (0xc000a066e0) Stream added, broadcasting: 1\nI0215 01:04:37.079843    4388 log.go:172] (0xc000afb3f0) Reply frame received for 1\nI0215 01:04:37.079909    4388 log.go:172] (0xc000afb3f0) (0xc000538640) Create stream\nI0215 01:04:37.079926    4388 log.go:172] (0xc000afb3f0) (0xc000538640) Stream added, broadcasting: 3\nI0215 01:04:37.081989    4388 log.go:172] (0xc000afb3f0) Reply frame received for 3\nI0215 01:04:37.082018    4388 log.go:172] (0xc000afb3f0) (0xc0007e12c0) Create stream\nI0215 01:04:37.082030    4388 log.go:172] (0xc000afb3f0) (0xc0007e12c0) Stream added, broadcasting: 5\nI0215 01:04:37.083193    4388 log.go:172] (0xc000afb3f0) Reply frame received for 5\nI0215 01:04:37.152682    4388 log.go:172] (0xc000afb3f0) Data frame received for 5\nI0215 01:04:37.152756    4388 log.go:172] (0xc0007e12c0) (5) Data frame handling\nI0215 01:04:37.152772    4388 log.go:172] (0xc0007e12c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0215 01:04:37.158595    4388 log.go:172] (0xc000afb3f0) Data frame received for 5\nI0215 01:04:37.158614    4388 log.go:172] (0xc0007e12c0) (5) Data frame handling\nI0215 01:04:37.158621    4388 log.go:172] (0xc0007e12c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0215 01:04:37.239530    4388 log.go:172] (0xc000afb3f0) (0xc000538640) Stream removed, broadcasting: 3\nI0215 01:04:37.239635    4388 log.go:172] (0xc000afb3f0) Data frame received for 1\nI0215 01:04:37.239674    4388 log.go:172] (0xc000a066e0) (1) Data frame handling\nI0215 01:04:37.239695    4388 log.go:172] (0xc000a066e0) (1) Data frame sent\nI0215 01:04:37.239711    4388 log.go:172] (0xc000afb3f0) (0xc000a066e0) Stream removed, broadcasting: 1\nI0215 01:04:37.239839    4388 log.go:172] (0xc000afb3f0) (0xc0007e12c0) Stream removed, broadcasting: 5\nI0215 01:04:37.240225    4388 log.go:172] (0xc000afb3f0) Go away received\nI0215 01:04:37.241003    4388 log.go:172] (0xc000afb3f0) (0xc000a066e0) Stream removed, broadcasting: 1\nI0215 01:04:37.241028    4388 log.go:172] (0xc000afb3f0) (0xc000538640) Stream removed, broadcasting: 3\nI0215 01:04:37.241059    4388 log.go:172] (0xc000afb3f0) (0xc0007e12c0) Stream removed, broadcasting: 5\n"
Feb 15 01:04:37.252: INFO: stdout: ""
Feb 15 01:04:37.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3687 execpod75qln -- /bin/sh -x -c nc -zv -t -w 2 10.96.44.147 80'
Feb 15 01:04:37.535: INFO: stderr: "I0215 01:04:37.395797    4408 log.go:172] (0xc000b1adc0) (0xc0009e25a0) Create stream\nI0215 01:04:37.396061    4408 log.go:172] (0xc000b1adc0) (0xc0009e25a0) Stream added, broadcasting: 1\nI0215 01:04:37.400046    4408 log.go:172] (0xc000b1adc0) Reply frame received for 1\nI0215 01:04:37.400118    4408 log.go:172] (0xc000b1adc0) (0xc0009e2640) Create stream\nI0215 01:04:37.400137    4408 log.go:172] (0xc000b1adc0) (0xc0009e2640) Stream added, broadcasting: 3\nI0215 01:04:37.402400    4408 log.go:172] (0xc000b1adc0) Reply frame received for 3\nI0215 01:04:37.402430    4408 log.go:172] (0xc000b1adc0) (0xc0005ee6e0) Create stream\nI0215 01:04:37.402438    4408 log.go:172] (0xc000b1adc0) (0xc0005ee6e0) Stream added, broadcasting: 5\nI0215 01:04:37.403813    4408 log.go:172] (0xc000b1adc0) Reply frame received for 5\nI0215 01:04:37.461614    4408 log.go:172] (0xc000b1adc0) Data frame received for 5\nI0215 01:04:37.461642    4408 log.go:172] (0xc0005ee6e0) (5) Data frame handling\nI0215 01:04:37.461652    4408 log.go:172] (0xc0005ee6e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.44.147 80\nConnection to 10.96.44.147 80 port [tcp/http] succeeded!\nI0215 01:04:37.529349    4408 log.go:172] (0xc000b1adc0) (0xc0009e2640) Stream removed, broadcasting: 3\nI0215 01:04:37.529743    4408 log.go:172] (0xc000b1adc0) Data frame received for 1\nI0215 01:04:37.529902    4408 log.go:172] (0xc000b1adc0) (0xc0005ee6e0) Stream removed, broadcasting: 5\nI0215 01:04:37.529926    4408 log.go:172] (0xc0009e25a0) (1) Data frame handling\nI0215 01:04:37.529957    4408 log.go:172] (0xc0009e25a0) (1) Data frame sent\nI0215 01:04:37.529967    4408 log.go:172] (0xc000b1adc0) (0xc0009e25a0) Stream removed, broadcasting: 1\nI0215 01:04:37.529978    4408 log.go:172] (0xc000b1adc0) Go away received\nI0215 01:04:37.530583    4408 log.go:172] (0xc000b1adc0) (0xc0009e25a0) Stream removed, broadcasting: 1\nI0215 01:04:37.530596    4408 log.go:172] (0xc000b1adc0) (0xc0009e2640) Stream removed, broadcasting: 3\nI0215 01:04:37.530603    4408 log.go:172] (0xc000b1adc0) (0xc0005ee6e0) Stream removed, broadcasting: 5\n"
Feb 15 01:04:37.535: INFO: stdout: ""
Feb 15 01:04:37.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3687 execpod75qln -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30821'
Feb 15 01:04:37.883: INFO: stderr: "I0215 01:04:37.708637    4428 log.go:172] (0xc000b38e70) (0xc0005a5f40) Create stream\nI0215 01:04:37.708799    4428 log.go:172] (0xc000b38e70) (0xc0005a5f40) Stream added, broadcasting: 1\nI0215 01:04:37.712220    4428 log.go:172] (0xc000b38e70) Reply frame received for 1\nI0215 01:04:37.712270    4428 log.go:172] (0xc000b38e70) (0xc00079e000) Create stream\nI0215 01:04:37.712283    4428 log.go:172] (0xc000b38e70) (0xc00079e000) Stream added, broadcasting: 3\nI0215 01:04:37.713415    4428 log.go:172] (0xc000b38e70) Reply frame received for 3\nI0215 01:04:37.713441    4428 log.go:172] (0xc000b38e70) (0xc00079e0a0) Create stream\nI0215 01:04:37.713450    4428 log.go:172] (0xc000b38e70) (0xc00079e0a0) Stream added, broadcasting: 5\nI0215 01:04:37.714827    4428 log.go:172] (0xc000b38e70) Reply frame received for 5\nI0215 01:04:37.790667    4428 log.go:172] (0xc000b38e70) Data frame received for 5\nI0215 01:04:37.790838    4428 log.go:172] (0xc00079e0a0) (5) Data frame handling\nI0215 01:04:37.790901    4428 log.go:172] (0xc00079e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30821\nI0215 01:04:37.793310    4428 log.go:172] (0xc000b38e70) Data frame received for 5\nI0215 01:04:37.793329    4428 log.go:172] (0xc00079e0a0) (5) Data frame handling\nI0215 01:04:37.793352    4428 log.go:172] (0xc00079e0a0) (5) Data frame sent\nConnection to 10.96.2.250 30821 port [tcp/30821] succeeded!\nI0215 01:04:37.872369    4428 log.go:172] (0xc000b38e70) (0xc00079e000) Stream removed, broadcasting: 3\nI0215 01:04:37.872533    4428 log.go:172] (0xc000b38e70) Data frame received for 1\nI0215 01:04:37.872566    4428 log.go:172] (0xc0005a5f40) (1) Data frame handling\nI0215 01:04:37.872622    4428 log.go:172] (0xc0005a5f40) (1) Data frame sent\nI0215 01:04:37.872678    4428 log.go:172] (0xc000b38e70) (0xc0005a5f40) Stream removed, broadcasting: 1\nI0215 01:04:37.872755    4428 log.go:172] (0xc000b38e70) (0xc00079e0a0) Stream removed, broadcasting: 5\nI0215 01:04:37.872796    4428 log.go:172] (0xc000b38e70) Go away received\nI0215 01:04:37.873726    4428 log.go:172] (0xc000b38e70) (0xc0005a5f40) Stream removed, broadcasting: 1\nI0215 01:04:37.873743    4428 log.go:172] (0xc000b38e70) (0xc00079e000) Stream removed, broadcasting: 3\nI0215 01:04:37.873758    4428 log.go:172] (0xc000b38e70) (0xc00079e0a0) Stream removed, broadcasting: 5\n"
Feb 15 01:04:37.883: INFO: stdout: ""
Feb 15 01:04:37.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3687 execpod75qln -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30821'
Feb 15 01:04:38.219: INFO: stderr: "I0215 01:04:38.070384    4448 log.go:172] (0xc00058ca50) (0xc0005ad5e0) Create stream\nI0215 01:04:38.070601    4448 log.go:172] (0xc00058ca50) (0xc0005ad5e0) Stream added, broadcasting: 1\nI0215 01:04:38.074977    4448 log.go:172] (0xc00058ca50) Reply frame received for 1\nI0215 01:04:38.075014    4448 log.go:172] (0xc00058ca50) (0xc00098c000) Create stream\nI0215 01:04:38.075029    4448 log.go:172] (0xc00058ca50) (0xc00098c000) Stream added, broadcasting: 3\nI0215 01:04:38.076300    4448 log.go:172] (0xc00058ca50) Reply frame received for 3\nI0215 01:04:38.076325    4448 log.go:172] (0xc00058ca50) (0xc0005ad680) Create stream\nI0215 01:04:38.076336    4448 log.go:172] (0xc00058ca50) (0xc0005ad680) Stream added, broadcasting: 5\nI0215 01:04:38.078035    4448 log.go:172] (0xc00058ca50) Reply frame received for 5\nI0215 01:04:38.138292    4448 log.go:172] (0xc00058ca50) Data frame received for 5\nI0215 01:04:38.138425    4448 log.go:172] (0xc0005ad680) (5) Data frame handling\nI0215 01:04:38.138483    4448 log.go:172] (0xc0005ad680) (5) Data frame sent\nI0215 01:04:38.138500    4448 log.go:172] (0xc00058ca50) Data frame received for 5\nI0215 01:04:38.138509    4448 log.go:172] (0xc0005ad680) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 30821\nI0215 01:04:38.138594    4448 log.go:172] (0xc0005ad680) (5) Data frame sent\nI0215 01:04:38.142910    4448 log.go:172] (0xc00058ca50) Data frame received for 5\nI0215 01:04:38.142925    4448 log.go:172] (0xc0005ad680) (5) Data frame handling\nI0215 01:04:38.142942    4448 log.go:172] (0xc0005ad680) (5) Data frame sent\nConnection to 10.96.1.234 30821 port [tcp/30821] succeeded!\nI0215 01:04:38.208727    4448 log.go:172] (0xc00058ca50) Data frame received for 1\nI0215 01:04:38.208966    4448 log.go:172] (0xc00058ca50) (0xc0005ad680) Stream removed, broadcasting: 5\nI0215 01:04:38.209060    4448 log.go:172] (0xc0005ad5e0) (1) Data frame handling\nI0215 01:04:38.209092    4448 log.go:172] (0xc0005ad5e0) (1) Data frame sent\nI0215 01:04:38.209121    4448 log.go:172] (0xc00058ca50) (0xc00098c000) Stream removed, broadcasting: 3\nI0215 01:04:38.209146    4448 log.go:172] (0xc00058ca50) (0xc0005ad5e0) Stream removed, broadcasting: 1\nI0215 01:04:38.209185    4448 log.go:172] (0xc00058ca50) Go away received\nI0215 01:04:38.209921    4448 log.go:172] (0xc00058ca50) (0xc0005ad5e0) Stream removed, broadcasting: 1\nI0215 01:04:38.209936    4448 log.go:172] (0xc00058ca50) (0xc00098c000) Stream removed, broadcasting: 3\nI0215 01:04:38.209943    4448 log.go:172] (0xc00058ca50) (0xc0005ad680) Stream removed, broadcasting: 5\n"
Feb 15 01:04:38.220: INFO: stdout: ""
Feb 15 01:04:38.220: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:04:38.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3687" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:30.890 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":181,"skipped":3085,"failed":0}
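The type change the test performs through the API can be approximated with stock kubectl (service name and namespace are from this run; the port list in the patch is an assumption):

```shell
# Start from an ExternalName service...
kubectl create service externalname externalname-service \
  --external-name=example.com --namespace=services-3687

# ...then flip it to NodePort. kube-proxy allocates a node port
# (30821 in the run above) on every node.
kubectl patch service externalname-service --namespace=services-3687 \
  -p '{"spec":{"type":"NodePort","ports":[{"port":80,"protocol":"TCP"}]}}'

# The connectivity checks above then probe, from an exec pod:
#   nc -zv -t -w 2 externalname-service 80   # service DNS name
#   nc -zv -t -w 2 <cluster-ip> 80           # ClusterIP
#   nc -zv -t -w 2 <node-ip> <node-port>     # NodePort on each node
```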
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:04:38.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 15 01:04:38.381: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:04:56.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9268" for this suite.

• [SLOW TEST:18.621 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":182,"skipped":3115,"failed":0}
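A minimal sketch of the pattern under test (pod name, image, and commands are illustrative): with `restartPolicy: Never`, init containers run sequentially to completion before the app container starts, and a failed init container fails the pod permanently rather than being retried.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1            # runs first, must exit 0
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  - name: init-2            # runs only after init-1 succeeds
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  containers:
  - name: run-1             # starts only after all init containers succeed
    image: busybox:1.29
    command: ['sh', '-c', 'true']
```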
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:04:56.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 15 01:05:07.284: INFO: 10 pods remaining
Feb 15 01:05:07.285: INFO: 8 pods has nil DeletionTimestamp
Feb 15 01:05:07.285: INFO: 
Feb 15 01:05:08.884: INFO: 0 pods remaining
Feb 15 01:05:08.884: INFO: 0 pods has nil DeletionTimestamp
Feb 15 01:05:08.884: INFO: 
STEP: Gathering metrics
W0215 01:05:09.377125      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 01:05:09.377: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:05:09.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2371" for this suite.

• [SLOW TEST:12.799 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":183,"skipped":3122,"failed":0}
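The "deleteOptions says so" in the test name refers to a foreground propagation policy: the owner gets a deletionTimestamp but is kept until the garbage collector has removed all dependents, which is why the log above counts pods down to zero before the RC disappears. Against the raw API (via `kubectl proxy` on its default 127.0.0.1:8001), this looks roughly like the following; the RC name is a placeholder, not taken from the log:

```shell
curl -X DELETE \
  "http://127.0.0.1:8001/api/v1/namespaces/gc-2371/replicationcontrollers/<rc-name>" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```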
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:05:09.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 15 01:05:10.235: INFO: Waiting up to 5m0s for pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41" in namespace "emptydir-7075" to be "success or failure"
Feb 15 01:05:10.263: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 27.729225ms
Feb 15 01:05:12.272: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036634285s
Feb 15 01:05:17.611: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 7.376302226s
Feb 15 01:05:24.912: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 14.677218709s
Feb 15 01:05:27.399: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 17.163945505s
Feb 15 01:05:29.702: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 19.466644846s
Feb 15 01:05:31.709: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 21.473616958s
Feb 15 01:05:33.719: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 23.483684622s
Feb 15 01:05:35.726: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.490603886s
STEP: Saw pod success
Feb 15 01:05:35.726: INFO: Pod "pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41" satisfied condition "success or failure"
Feb 15 01:05:35.730: INFO: Trying to get logs from node jerma-node pod pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41 container test-container: 
STEP: delete the pod
Feb 15 01:05:35.780: INFO: Waiting for pod pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41 to disappear
Feb 15 01:05:35.784: INFO: Pod pod-4afa0300-4677-4cd9-8cac-15f2f14d5e41 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:05:35.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7075" for this suite.

• [SLOW TEST:26.084 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":3122,"failed":0}
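The volume under test can be reproduced with a short manifest (pod name, image, and mount path are illustrative). An `emptyDir: {}` with no `medium` field selects the default medium — node-local disk rather than tmpfs — and the conformance test inspects the mount point's file mode from inside the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Print the mount point's mode, analogous to what the test asserts on.
    command: ['sh', '-c', 'ls -ld /test-volume']
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}          # no medium => default (disk-backed) medium
```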
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:05:35.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:06:09.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3289" for this suite.

• [SLOW TEST:34.167 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":185,"skipped":3138,"failed":0}
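A hypothetical Job reproducing "tasks sometimes fail and are locally restarted" (all names and the flaky command are invented for illustration): `restartPolicy: OnFailure` makes the kubelet restart the failed container inside the same pod — a "local" restart — and a marker file in an emptyDir survives those restarts, so the second attempt succeeds and counts toward completions.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-job
spec:
  completions: 1
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: c
        image: busybox:1.29
        # Fails on the first run, succeeds once the marker from the
        # previous attempt is visible (emptyDir outlives the container).
        command: ['sh', '-c', 'if [ -f /data/ok ]; then exit 0; else touch /data/ok; exit 1; fi']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
```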
S
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:06:10.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Feb 15 01:06:10.159: INFO: Created pod &Pod{ObjectMeta:{dns-3568  dns-3568 /api/v1/namespaces/dns-3568/pods/dns-3568 be874a90-f94b-498f-975f-7e60189cefc2 8492186 0 2020-02-15 01:06:10 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdx6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdx6w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdx6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 15 01:06:10.168: INFO: The status of Pod dns-3568 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:12.207: INFO: The status of Pod dns-3568 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:14.175: INFO: The status of Pod dns-3568 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:16.176: INFO: The status of Pod dns-3568 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:18.176: INFO: The status of Pod dns-3568 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Feb 15 01:06:18.176: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3568 PodName:dns-3568 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:06:18.176: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:06:18.254479      10 log.go:172] (0xc001b78370) (0xc002e0eb40) Create stream
I0215 01:06:18.254578      10 log.go:172] (0xc001b78370) (0xc002e0eb40) Stream added, broadcasting: 1
I0215 01:06:18.258762      10 log.go:172] (0xc001b78370) Reply frame received for 1
I0215 01:06:18.258802      10 log.go:172] (0xc001b78370) (0xc001bdb360) Create stream
I0215 01:06:18.258815      10 log.go:172] (0xc001b78370) (0xc001bdb360) Stream added, broadcasting: 3
I0215 01:06:18.261039      10 log.go:172] (0xc001b78370) Reply frame received for 3
I0215 01:06:18.261095      10 log.go:172] (0xc001b78370) (0xc001eca0a0) Create stream
I0215 01:06:18.261119      10 log.go:172] (0xc001b78370) (0xc001eca0a0) Stream added, broadcasting: 5
I0215 01:06:18.267343      10 log.go:172] (0xc001b78370) Reply frame received for 5
I0215 01:06:18.394967      10 log.go:172] (0xc001b78370) Data frame received for 3
I0215 01:06:18.395040      10 log.go:172] (0xc001bdb360) (3) Data frame handling
I0215 01:06:18.395078      10 log.go:172] (0xc001bdb360) (3) Data frame sent
I0215 01:06:18.505403      10 log.go:172] (0xc001b78370) Data frame received for 1
I0215 01:06:18.505844      10 log.go:172] (0xc001b78370) (0xc001bdb360) Stream removed, broadcasting: 3
I0215 01:06:18.505969      10 log.go:172] (0xc002e0eb40) (1) Data frame handling
I0215 01:06:18.506006      10 log.go:172] (0xc002e0eb40) (1) Data frame sent
I0215 01:06:18.506049      10 log.go:172] (0xc001b78370) (0xc001eca0a0) Stream removed, broadcasting: 5
I0215 01:06:18.506147      10 log.go:172] (0xc001b78370) (0xc002e0eb40) Stream removed, broadcasting: 1
I0215 01:06:18.506252      10 log.go:172] (0xc001b78370) Go away received
I0215 01:06:18.506751      10 log.go:172] (0xc001b78370) (0xc002e0eb40) Stream removed, broadcasting: 1
I0215 01:06:18.506777      10 log.go:172] (0xc001b78370) (0xc001bdb360) Stream removed, broadcasting: 3
I0215 01:06:18.506796      10 log.go:172] (0xc001b78370) (0xc001eca0a0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Feb 15 01:06:18.506: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3568 PodName:dns-3568 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:06:18.507: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:06:18.558372      10 log.go:172] (0xc002b7f6b0) (0xc001bdbae0) Create stream
I0215 01:06:18.558601      10 log.go:172] (0xc002b7f6b0) (0xc001bdbae0) Stream added, broadcasting: 1
I0215 01:06:18.563717      10 log.go:172] (0xc002b7f6b0) Reply frame received for 1
I0215 01:06:18.563791      10 log.go:172] (0xc002b7f6b0) (0xc002e0ec80) Create stream
I0215 01:06:18.563803      10 log.go:172] (0xc002b7f6b0) (0xc002e0ec80) Stream added, broadcasting: 3
I0215 01:06:18.566175      10 log.go:172] (0xc002b7f6b0) Reply frame received for 3
I0215 01:06:18.566204      10 log.go:172] (0xc002b7f6b0) (0xc00241d220) Create stream
I0215 01:06:18.566214      10 log.go:172] (0xc002b7f6b0) (0xc00241d220) Stream added, broadcasting: 5
I0215 01:06:18.568419      10 log.go:172] (0xc002b7f6b0) Reply frame received for 5
I0215 01:06:18.659066      10 log.go:172] (0xc002b7f6b0) Data frame received for 3
I0215 01:06:18.659160      10 log.go:172] (0xc002e0ec80) (3) Data frame handling
I0215 01:06:18.659177      10 log.go:172] (0xc002e0ec80) (3) Data frame sent
I0215 01:06:18.735399      10 log.go:172] (0xc002b7f6b0) Data frame received for 1
I0215 01:06:18.735509      10 log.go:172] (0xc002b7f6b0) (0xc002e0ec80) Stream removed, broadcasting: 3
I0215 01:06:18.735542      10 log.go:172] (0xc001bdbae0) (1) Data frame handling
I0215 01:06:18.735562      10 log.go:172] (0xc001bdbae0) (1) Data frame sent
I0215 01:06:18.735587      10 log.go:172] (0xc002b7f6b0) (0xc00241d220) Stream removed, broadcasting: 5
I0215 01:06:18.735621      10 log.go:172] (0xc002b7f6b0) (0xc001bdbae0) Stream removed, broadcasting: 1
I0215 01:06:18.735643      10 log.go:172] (0xc002b7f6b0) Go away received
I0215 01:06:18.736471      10 log.go:172] (0xc002b7f6b0) (0xc001bdbae0) Stream removed, broadcasting: 1
I0215 01:06:18.736495      10 log.go:172] (0xc002b7f6b0) (0xc002e0ec80) Stream removed, broadcasting: 3
I0215 01:06:18.736501      10 log.go:172] (0xc002b7f6b0) (0xc00241d220) Stream removed, broadcasting: 5
Feb 15 01:06:18.736: INFO: Deleting pod dns-3568...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:06:18.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3568" for this suite.

• [SLOW TEST:8.784 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":186,"skipped":3139,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:06:18.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-b2c87887-68b1-4cc3-8167-a5f808a58f5b
STEP: Creating a pod to test consume configMaps
Feb 15 01:06:18.912: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557" in namespace "configmap-3516" to be "success or failure"
Feb 15 01:06:18.929: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 16.684261ms
Feb 15 01:06:20.943: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030502058s
Feb 15 01:06:22.949: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037119989s
Feb 15 01:06:24.958: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045874418s
Feb 15 01:06:27.019: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107156522s
Feb 15 01:06:29.024: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111446567s
Feb 15 01:06:31.033: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Pending", Reason="", readiness=false. Elapsed: 12.12024907s
Feb 15 01:06:33.040: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.127593088s
STEP: Saw pod success
Feb 15 01:06:33.040: INFO: Pod "pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557" satisfied condition "success or failure"
Feb 15 01:06:33.044: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557 container configmap-volume-test: 
STEP: delete the pod
Feb 15 01:06:33.161: INFO: Waiting for pod pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557 to disappear
Feb 15 01:06:33.169: INFO: Pod pod-configmaps-d8bc43a6-f26e-4f43-9604-aa8e8a6b1557 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:06:33.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3516" for this suite.

• [SLOW TEST:14.388 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":187,"skipped":3139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:06:33.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-1430
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 15 01:06:33.306: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 15 01:06:33.503: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:35.612: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:37.512: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:40.678: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:41.529: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:43.857: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:45.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 15 01:06:47.511: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:06:49.511: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:06:51.511: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:06:53.511: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:06:55.511: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:06:57.509: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 15 01:06:59.525: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 15 01:06:59.534: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 15 01:07:01.564: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 15 01:07:03.544: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 15 01:07:05.542: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 15 01:07:13.594: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1430 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:07:13.594: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:07:13.653102      10 log.go:172] (0xc001b788f0) (0xc002a68fa0) Create stream
I0215 01:07:13.653256      10 log.go:172] (0xc001b788f0) (0xc002a68fa0) Stream added, broadcasting: 1
I0215 01:07:13.657380      10 log.go:172] (0xc001b788f0) Reply frame received for 1
I0215 01:07:13.657440      10 log.go:172] (0xc001b788f0) (0xc001eca280) Create stream
I0215 01:07:13.657459      10 log.go:172] (0xc001b788f0) (0xc001eca280) Stream added, broadcasting: 3
I0215 01:07:13.659348      10 log.go:172] (0xc001b788f0) Reply frame received for 3
I0215 01:07:13.659379      10 log.go:172] (0xc001b788f0) (0xc001eca320) Create stream
I0215 01:07:13.659391      10 log.go:172] (0xc001b788f0) (0xc001eca320) Stream added, broadcasting: 5
I0215 01:07:13.660880      10 log.go:172] (0xc001b788f0) Reply frame received for 5
I0215 01:07:13.795138      10 log.go:172] (0xc001b788f0) Data frame received for 3
I0215 01:07:13.795254      10 log.go:172] (0xc001eca280) (3) Data frame handling
I0215 01:07:13.795291      10 log.go:172] (0xc001eca280) (3) Data frame sent
I0215 01:07:13.908245      10 log.go:172] (0xc001b788f0) (0xc001eca280) Stream removed, broadcasting: 3
I0215 01:07:13.908547      10 log.go:172] (0xc001b788f0) Data frame received for 1
I0215 01:07:13.908624      10 log.go:172] (0xc001b788f0) (0xc001eca320) Stream removed, broadcasting: 5
I0215 01:07:13.908697      10 log.go:172] (0xc002a68fa0) (1) Data frame handling
I0215 01:07:13.908765      10 log.go:172] (0xc002a68fa0) (1) Data frame sent
I0215 01:07:13.908927      10 log.go:172] (0xc001b788f0) (0xc002a68fa0) Stream removed, broadcasting: 1
I0215 01:07:13.908967      10 log.go:172] (0xc001b788f0) Go away received
I0215 01:07:13.909494      10 log.go:172] (0xc001b788f0) (0xc002a68fa0) Stream removed, broadcasting: 1
I0215 01:07:13.909507      10 log.go:172] (0xc001b788f0) (0xc001eca280) Stream removed, broadcasting: 3
I0215 01:07:13.909515      10 log.go:172] (0xc001b788f0) (0xc001eca320) Stream removed, broadcasting: 5
Feb 15 01:07:13.909: INFO: Waiting for responses: map[]
Feb 15 01:07:13.934: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1430 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 15 01:07:13.934: INFO: >>> kubeConfig: /root/.kube/config
I0215 01:07:13.985327      10 log.go:172] (0xc002b7fce0) (0xc0018fa1e0) Create stream
I0215 01:07:13.985594      10 log.go:172] (0xc002b7fce0) (0xc0018fa1e0) Stream added, broadcasting: 1
I0215 01:07:13.991213      10 log.go:172] (0xc002b7fce0) Reply frame received for 1
I0215 01:07:13.991270      10 log.go:172] (0xc002b7fce0) (0xc0018fa280) Create stream
I0215 01:07:13.991291      10 log.go:172] (0xc002b7fce0) (0xc0018fa280) Stream added, broadcasting: 3
I0215 01:07:13.993867      10 log.go:172] (0xc002b7fce0) Reply frame received for 3
I0215 01:07:13.993913      10 log.go:172] (0xc002b7fce0) (0xc002ad8820) Create stream
I0215 01:07:13.993931      10 log.go:172] (0xc002b7fce0) (0xc002ad8820) Stream added, broadcasting: 5
I0215 01:07:13.996462      10 log.go:172] (0xc002b7fce0) Reply frame received for 5
I0215 01:07:14.104148      10 log.go:172] (0xc002b7fce0) Data frame received for 3
I0215 01:07:14.104341      10 log.go:172] (0xc0018fa280) (3) Data frame handling
I0215 01:07:14.104437      10 log.go:172] (0xc0018fa280) (3) Data frame sent
I0215 01:07:14.191783      10 log.go:172] (0xc002b7fce0) Data frame received for 1
I0215 01:07:14.191887      10 log.go:172] (0xc002b7fce0) (0xc002ad8820) Stream removed, broadcasting: 5
I0215 01:07:14.191921      10 log.go:172] (0xc0018fa1e0) (1) Data frame handling
I0215 01:07:14.191951      10 log.go:172] (0xc0018fa1e0) (1) Data frame sent
I0215 01:07:14.191979      10 log.go:172] (0xc002b7fce0) (0xc0018fa280) Stream removed, broadcasting: 3
I0215 01:07:14.191999      10 log.go:172] (0xc002b7fce0) (0xc0018fa1e0) Stream removed, broadcasting: 1
I0215 01:07:14.192010      10 log.go:172] (0xc002b7fce0) Go away received
I0215 01:07:14.192150      10 log.go:172] (0xc002b7fce0) (0xc0018fa1e0) Stream removed, broadcasting: 1
I0215 01:07:14.192160      10 log.go:172] (0xc002b7fce0) (0xc0018fa280) Stream removed, broadcasting: 3
I0215 01:07:14.192169      10 log.go:172] (0xc002b7fce0) (0xc002ad8820) Stream removed, broadcasting: 5
Feb 15 01:07:14.192: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:07:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1430" for this suite.

• [SLOW TEST:41.021 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":3179,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:07:14.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 01:07:14.765: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 01:07:17.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:07:19.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:07:21.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:07:24.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:07:26.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:07:27.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325634, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 01:07:30.286: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:07:30.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-31" for this suite.
STEP: Destroying namespace "webhook-31-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:16.363 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":189,"skipped":3180,"failed":0}
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:07:30.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:07:30.767: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 15 01:07:33.063: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:07:33.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6084" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":190,"skipped":3180,"failed":0}
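The quota created above ("allows only two pods") can be expressed as a plain manifest; a minimal sketch, with the name `condition-test` taken from the log and the rest being the standard ResourceQuota shape:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"   # the rc in the test asks for more replicas than this
```

When a ReplicationController requests more replicas than the quota permits, the controller surfaces a `ReplicaFailure` condition in the RC status; scaling the RC back under the quota, as the test does, clears it.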
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:07:33.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 15 01:07:35.009: INFO: Waiting up to 5m0s for pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca" in namespace "emptydir-8554" to be "success or failure"
Feb 15 01:07:35.080: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 70.172986ms
Feb 15 01:07:37.086: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076462187s
Feb 15 01:07:39.117: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107359996s
Feb 15 01:07:41.557: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547877694s
Feb 15 01:07:43.640: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630570069s
Feb 15 01:07:45.650: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640795601s
Feb 15 01:07:47.738: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.728141036s
Feb 15 01:07:49.746: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.73622688s
STEP: Saw pod success
Feb 15 01:07:49.746: INFO: Pod "pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca" satisfied condition "success or failure"
Feb 15 01:07:49.750: INFO: Trying to get logs from node jerma-node pod pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca container test-container: 
STEP: delete the pod
Feb 15 01:07:49.855: INFO: Waiting for pod pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca to disappear
Feb 15 01:07:49.870: INFO: Pod pod-f7cb9f20-5d71-42d9-8ae7-7655bd819cca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:07:49.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8554" for this suite.

• [SLOW TEST:16.159 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":191,"skipped":3192,"failed":0}
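The repeated `Phase="Pending" … Elapsed: …` lines above come from a poll-until-terminal-phase loop in the e2e framework (which is Go). A minimal illustrative sketch of the same pattern in Python; the stubbed `phases` iterator stands in for the API call:

```python
import time

def wait_for_pod_phase(get_phase, terminal=("Succeeded", "Failed"),
                       timeout=300.0, interval=0.01):
    """Poll get_phase() until it returns a terminal phase or the timeout elapses."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase="{phase}", elapsed={elapsed:.3f}s')
        if phase in terminal:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f'pod still "{phase}" after {elapsed:.1f}s')
        time.sleep(interval)

# Stubbed phase sequence mirroring the log: a few Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases))
```

The framework treats either terminal phase as satisfying its "success or failure" condition, then asserts separately that the phase was `Succeeded`.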
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:07:49.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5776
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-5776
Feb 15 01:07:50.041: INFO: Found 0 stateful pods, waiting for 1
Feb 15 01:08:01.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 01:08:10.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 01:08:20.049: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 15 01:08:20.080: INFO: Deleting all statefulset in ns statefulset-5776
Feb 15 01:08:20.083: INFO: Scaling statefulset ss to 0
Feb 15 01:08:40.279: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 01:08:40.283: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:08:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5776" for this suite.

• [SLOW TEST:50.476 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":192,"skipped":3200,"failed":0}
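The "getting/updating a scale subresource" steps above read and patch an `autoscaling/v1` Scale object rather than the StatefulSet itself. A sketch of that object's shape, with names from the log and replica values illustrative:

```yaml
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: ss
  namespace: statefulset-5776
spec:
  replicas: 2   # desired count; patching this scales the StatefulSet
status:
  replicas: 1   # observed count, filled in by the controller
```

`kubectl scale statefulset ss --replicas=2` drives this same subresource.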
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:08:40.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 15 01:08:56.621: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:08:56.670: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:08:58.670: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:08:58.977: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:00.670: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:01.184: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:02.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:02.689: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:04.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:04.680: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:06.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:06.679: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:08.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:08.681: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:10.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:10.684: INFO: Pod pod-with-prestop-http-hook still exists
Feb 15 01:09:12.671: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 15 01:09:12.684: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:09:12.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6269" for this suite.

• [SLOW TEST:32.364 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":3218,"failed":0}
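The pod under test carries a `preStop` HTTP hook pointing at the handler pod created in BeforeEach. A sketch of the relevant container stanza (image, path, port, and host are illustrative; the `lifecycle`/`httpGet` field shape is the standard API):

```yaml
containers:
- name: pod-with-prestop-http-hook
  image: k8s.gcr.io/pause:3.1     # illustrative
  lifecycle:
    preStop:
      httpGet:
        path: /echo?msg=prestop   # illustrative
        port: 8080
        host: 10.32.0.4           # illustrative: IP of the hook-handler pod
```

The long run of "still exists" polls above reflects graceful termination: the kubelet executes the preStop hook before signaling the container, and the pod object lingers until the grace period completes.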
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:09:12.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:09:19.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3806" for this suite.
STEP: Destroying namespace "nsdeletetest-8413" for this suite.
Feb 15 01:09:19.313: INFO: Namespace nsdeletetest-8413 was already deleted
STEP: Destroying namespace "nsdeletetest-3194" for this suite.

• [SLOW TEST:6.600 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":194,"skipped":3236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:09:19.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:09:19.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e" in namespace "projected-6760" to be "success or failure"
Feb 15 01:09:19.579: INFO: Pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.705318ms
Feb 15 01:09:21.587: INFO: Pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017953642s
Feb 15 01:09:23.599: INFO: Pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030150055s
Feb 15 01:09:25.948: INFO: Pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379367457s
Feb 15 01:09:27.955: INFO: Pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.386499467s
STEP: Saw pod success
Feb 15 01:09:27.956: INFO: Pod "downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e" satisfied condition "success or failure"
Feb 15 01:09:27.960: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e container client-container: 
STEP: delete the pod
Feb 15 01:09:28.135: INFO: Waiting for pod downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e to disappear
Feb 15 01:09:28.173: INFO: Pod downwardapi-volume-d2c708c8-9852-4f83-8466-2e44f837e53e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:09:28.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6760" for this suite.

• [SLOW TEST:8.865 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":3267,"failed":0}
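The "node allocatable (memory) as default" behavior comes from a downward API `resourceFieldRef` on a container that declares no memory limit: the kubelet then substitutes the node's allocatable memory. A sketch of the projected volume source (volume and container names illustrative, field shape standard):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: "memory_limit"
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
```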
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:09:28.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-f533bbe1-1674-4c7f-ae69-43ef7dcb64d0
STEP: Creating a pod to test consume configMaps
Feb 15 01:09:28.398: INFO: Waiting up to 5m0s for pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6" in namespace "configmap-1671" to be "success or failure"
Feb 15 01:09:28.489: INFO: Pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 90.623105ms
Feb 15 01:09:30.502: INFO: Pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103202287s
Feb 15 01:09:32.511: INFO: Pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112796664s
Feb 15 01:09:34.537: INFO: Pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138847987s
Feb 15 01:09:36.571: INFO: Pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172893572s
STEP: Saw pod success
Feb 15 01:09:36.572: INFO: Pod "pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6" satisfied condition "success or failure"
Feb 15 01:09:36.591: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6 container configmap-volume-test: 
STEP: delete the pod
Feb 15 01:09:36.651: INFO: Waiting for pod pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6 to disappear
Feb 15 01:09:36.656: INFO: Pod pod-configmaps-d601a527-5510-40f1-b825-b4f4bd999aa6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:09:36.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1671" for this suite.

• [SLOW TEST:8.503 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":3268,"failed":0}
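"Consumable in multiple volumes in the same pod" means one ConfigMap referenced through two separate volumes, each with its own mount path. A sketch of the volumes section, reusing the ConfigMap name from the log (volume names illustrative):

```yaml
volumes:
- name: configmap-volume-1
  configMap:
    name: configmap-test-volume-f533bbe1-1674-4c7f-ae69-43ef7dcb64d0
- name: configmap-volume-2
  configMap:
    name: configmap-test-volume-f533bbe1-1674-4c7f-ae69-43ef7dcb64d0
```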
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:09:36.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4775 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4775;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4775 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4775;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4775.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4775.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4775.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4775.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4775.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4775.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4775.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.141.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.141.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.141.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.141.78_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4775 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4775;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4775 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4775;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4775.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4775.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4775.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4775.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4775.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4775.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4775.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4775.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4775.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.141.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.141.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.141.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.141.78_tcp@PTR;sleep 1; done

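Two of the record names probed by the commands above are derived mechanically from IP addresses: the pod A record dashes the pod IP (the `hostname -i | awk …` pipeline), and the PTR query name reverses the service IP's octets under `in-addr.arpa.`. A small illustrative Python sketch of both derivations (the pod IP used in the example is made up):

```python
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """1.2.3.4 -> 1-2-3-4.<ns>.pod.cluster.local, as the awk pipeline builds it."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"

def ptr_name(ip: str) -> str:
    """10.96.141.78 -> 78.141.96.10.in-addr.arpa. (reverse-DNS query name)."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.44.0.1", "dns-4775"))  # 10-44-0-1.dns-4775.pod.cluster.local
print(ptr_name("10.96.141.78"))               # 78.141.96.10.in-addr.arpa.
```

The service IP `10.96.141.78` matches the `/results/10.96.141.78_udp@PTR` targets in the probe commands.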
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 01:09:49.089: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.092: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.102: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.105: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.108: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.111: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.130: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.133: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.136: INFO: Unable to read jessie_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.138: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.141: INFO: Unable to read jessie_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.143: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.146: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.148: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:49.166: INFO: Lookups using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4775 wheezy_tcp@dns-test-service.dns-4775 wheezy_udp@dns-test-service.dns-4775.svc wheezy_tcp@dns-test-service.dns-4775.svc wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4775 jessie_tcp@dns-test-service.dns-4775 jessie_udp@dns-test-service.dns-4775.svc jessie_tcp@dns-test-service.dns-4775.svc jessie_udp@_http._tcp.dns-test-service.dns-4775.svc jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc]

Feb 15 01:09:54.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.178: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.187: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.191: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.196: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.199: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.203: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.231: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.234: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.238: INFO: Unable to read jessie_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.242: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.245: INFO: Unable to read jessie_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.249: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.253: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.256: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:54.289: INFO: Lookups using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4775 wheezy_tcp@dns-test-service.dns-4775 wheezy_udp@dns-test-service.dns-4775.svc wheezy_tcp@dns-test-service.dns-4775.svc wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4775 jessie_tcp@dns-test-service.dns-4775 jessie_udp@dns-test-service.dns-4775.svc jessie_tcp@dns-test-service.dns-4775.svc jessie_udp@_http._tcp.dns-test-service.dns-4775.svc jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc]

Feb 15 01:09:59.262: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.266: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.269: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.275: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.278: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.282: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.286: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.310: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.312: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.315: INFO: Unable to read jessie_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.321: INFO: Unable to read jessie_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.324: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.326: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.329: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:09:59.346: INFO: Lookups using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4775 wheezy_tcp@dns-test-service.dns-4775 wheezy_udp@dns-test-service.dns-4775.svc wheezy_tcp@dns-test-service.dns-4775.svc wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4775 jessie_tcp@dns-test-service.dns-4775 jessie_udp@dns-test-service.dns-4775.svc jessie_tcp@dns-test-service.dns-4775.svc jessie_udp@_http._tcp.dns-test-service.dns-4775.svc jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc]

Feb 15 01:10:04.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.176: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.188: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.194: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.199: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.204: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.244: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.250: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.254: INFO: Unable to read jessie_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.258: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.269: INFO: Unable to read jessie_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.280: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:04.325: INFO: Lookups using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4775 wheezy_tcp@dns-test-service.dns-4775 wheezy_udp@dns-test-service.dns-4775.svc wheezy_tcp@dns-test-service.dns-4775.svc wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4775 jessie_tcp@dns-test-service.dns-4775 jessie_udp@dns-test-service.dns-4775.svc jessie_tcp@dns-test-service.dns-4775.svc jessie_udp@_http._tcp.dns-test-service.dns-4775.svc jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc]

Feb 15 01:10:09.240: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.277: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.284: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.290: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.295: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.300: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.306: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.312: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.394: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.408: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.414: INFO: Unable to read jessie_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.428: INFO: Unable to read jessie_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.433: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.438: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.445: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:09.532: INFO: Lookups using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4775 wheezy_tcp@dns-test-service.dns-4775 wheezy_udp@dns-test-service.dns-4775.svc wheezy_tcp@dns-test-service.dns-4775.svc wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4775 jessie_tcp@dns-test-service.dns-4775 jessie_udp@dns-test-service.dns-4775.svc jessie_tcp@dns-test-service.dns-4775.svc jessie_udp@_http._tcp.dns-test-service.dns-4775.svc jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc]

Feb 15 01:10:14.175: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.179: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.186: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.189: INFO: Unable to read wheezy_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.193: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.197: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.200: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.225: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.229: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.232: INFO: Unable to read jessie_udp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.235: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775 from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.240: INFO: Unable to read jessie_udp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.248: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.252: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc from pod dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76: the server could not find the requested resource (get pods dns-test-3aed3191-445e-4723-ac25-40a9c907ee76)
Feb 15 01:10:14.283: INFO: Lookups using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4775 wheezy_tcp@dns-test-service.dns-4775 wheezy_udp@dns-test-service.dns-4775.svc wheezy_tcp@dns-test-service.dns-4775.svc wheezy_udp@_http._tcp.dns-test-service.dns-4775.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4775.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4775 jessie_tcp@dns-test-service.dns-4775 jessie_udp@dns-test-service.dns-4775.svc jessie_tcp@dns-test-service.dns-4775.svc jessie_udp@_http._tcp.dns-test-service.dns-4775.svc jessie_tcp@_http._tcp.dns-test-service.dns-4775.svc]

Feb 15 01:10:19.259: INFO: DNS probes using dns-4775/dns-test-3aed3191-445e-4723-ac25-40a9c907ee76 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:10:20.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4775" for this suite.

• [SLOW TEST:43.380 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":197,"skipped":3274,"failed":0}
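For context, the DNS conformance test above works by launching a probe pod that repeatedly resolves the target service under progressively shorter, partially qualified names (`dns-test-service`, `dns-test-service.dns-4775`, `dns-test-service.dns-4775.svc`, and the SRV form `_http._tcp.…`) and writes each answer to a results volume that the test polls; the repeated "Unable to read … the server could not find the requested resource" lines are polls that ran before the probe containers had written their result files, and the final "DNS probes … succeeded" line is the poll that found them. A minimal sketch of such a probe pod — the pod name, image, and script here are illustrative, not the exact manifest the e2e framework generates (it uses purpose-built jessie/wheezy query images):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe            # hypothetical; the framework generates dns-test-<uuid>
  namespace: dns-4775
spec:
  containers:
  - name: querier
    image: busybox:1.31      # illustrative stand-in for the test's query images
    command:
    - sh
    - -c
    # Resolve the service under each partially qualified form every few
    # seconds, recording each answer where the test harness can read it.
    - |
      while true; do
        for n in dns-test-service dns-test-service.dns-4775 dns-test-service.dns-4775.svc; do
          nslookup "$n" > "/results/$n" 2>&1
        done
        sleep 5
      done
    volumeMounts:
    - name: results
      mountPath: /results
  volumes:
  - name: results
    emptyDir: {}
```

Partial names resolve because the kubelet writes search domains (`dns-4775.svc.cluster.local`, `svc.cluster.local`, `cluster.local`) into the pod's `/etc/resolv.conf`, so the resolver expands each short name before querying the cluster DNS service.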
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:10:20.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 01:10:21.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 01:10:23.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325820, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:10:25.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325820, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:10:27.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325820, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:10:29.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325821, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717325820, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 01:10:32.169: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:10:32.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5647-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:10:33.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7872" for this suite.
STEP: Destroying namespace "webhook-7872-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.552 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":198,"skipped":3281,"failed":0}
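The webhook test above deploys a sample webhook server (`sample-webhook-deployment`), fronts it with a service, and then registers a mutating webhook for the generated custom resource via the AdmissionRegistration API; "with pruning" means the target CRD carries a structural schema, so any fields the webhook's mutation patch adds are pruned unless the schema declares them. A sketch of the kind of registration involved — the webhook name, path, and CA placeholder are assumptions, not the suite's generated values:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook      # hypothetical; the suite generates its own name
webhooks:
- name: mutate-crd.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-5647-crds"]
  clientConfig:
    service:
      namespace: webhook-7872          # the test namespace seen in the log
      name: e2e-test-webhook
      path: /mutating-custom-resource  # illustrative path
    caBundle: <base64-encoded CA>      # the test installs its own self-signed cert
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The `admissionReviewVersions` and `sideEffects` fields are mandatory in `admissionregistration.k8s.io/v1`; the apiserver calls the service over TLS, which is why the test's first steps set up a server cert and a role binding to read `extension-apiserver-authentication`.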
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:10:33.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:10:33.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 15 01:10:36.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7277 create -f -'
Feb 15 01:10:39.915: INFO: stderr: ""
Feb 15 01:10:39.916: INFO: stdout: "e2e-test-crd-publish-openapi-6666-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 15 01:10:39.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7277 delete e2e-test-crd-publish-openapi-6666-crds test-cr'
Feb 15 01:10:40.087: INFO: stderr: ""
Feb 15 01:10:40.087: INFO: stdout: "e2e-test-crd-publish-openapi-6666-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 15 01:10:40.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7277 apply -f -'
Feb 15 01:10:40.541: INFO: stderr: ""
Feb 15 01:10:40.542: INFO: stdout: "e2e-test-crd-publish-openapi-6666-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 15 01:10:40.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7277 delete e2e-test-crd-publish-openapi-6666-crds test-cr'
Feb 15 01:10:40.651: INFO: stderr: ""
Feb 15 01:10:40.651: INFO: stdout: "e2e-test-crd-publish-openapi-6666-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 15 01:10:40.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6666-crds'
Feb 15 01:10:41.036: INFO: stderr: ""
Feb 15 01:10:41.037: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6666-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:10:43.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7277" for this suite.

• [SLOW TEST:10.344 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":199,"skipped":3281,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:10:43.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:10:55.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6070" for this suite.

• [SLOW TEST:11.293 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":200,"skipped":3297,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:10:55.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 01:11:06.503: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:11:06.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9590" for this suite.

• [SLOW TEST:11.383 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3304,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:11:06.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-d66ae234-2c98-42d1-a3c5-d18380da0843
STEP: Creating a pod to test consume configMaps
Feb 15 01:11:10.697: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6" in namespace "projected-2350" to be "success or failure"
Feb 15 01:11:10.885: INFO: Pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 186.706997ms
Feb 15 01:11:12.899: INFO: Pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200447638s
Feb 15 01:11:14.909: INFO: Pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210754853s
Feb 15 01:11:16.914: INFO: Pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215661187s
Feb 15 01:11:18.926: INFO: Pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.227802227s
STEP: Saw pod success
Feb 15 01:11:18.926: INFO: Pod "pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6" satisfied condition "success or failure"
Feb 15 01:11:18.932: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 01:11:18.997: INFO: Waiting for pod pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6 to disappear
Feb 15 01:11:19.007: INFO: Pod pod-projected-configmaps-13ae00bd-89c9-4d7d-9595-cd828c189aa6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:11:19.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2350" for this suite.

• [SLOW TEST:12.404 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3312,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:11:19.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:11:27.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-858" for this suite.

• [SLOW TEST:8.225 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":203,"skipped":3330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:11:27.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-5677/configmap-test-8206b974-62fd-449c-a4a3-8f4c28fdcbb6
STEP: Creating a pod to test consume configMaps
Feb 15 01:11:27.497: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb" in namespace "configmap-5677" to be "success or failure"
Feb 15 01:11:27.516: INFO: Pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.636321ms
Feb 15 01:11:29.523: INFO: Pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024897983s
Feb 15 01:11:31.531: INFO: Pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033812986s
Feb 15 01:11:33.540: INFO: Pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042552166s
Feb 15 01:11:35.546: INFO: Pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048868563s
STEP: Saw pod success
Feb 15 01:11:35.547: INFO: Pod "pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb" satisfied condition "success or failure"
Feb 15 01:11:35.550: INFO: Trying to get logs from node jerma-node pod pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb container env-test: 
STEP: delete the pod
Feb 15 01:11:35.613: INFO: Waiting for pod pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb to disappear
Feb 15 01:11:35.644: INFO: Pod pod-configmaps-6b9bd0e1-10c9-473a-875d-fcdfd00b0fdb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:11:35.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5677" for this suite.

• [SLOW TEST:8.433 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3371,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:11:35.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:11:35.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-470
I0215 01:11:35.885075      10 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-470, replica count: 1
I0215 01:11:36.936147      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:37.936836      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:38.937541      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:39.938308      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:40.939018      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:41.939567      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:42.940318      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:43.940814      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:44.941192      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:11:45.941584      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 15 01:11:46.076: INFO: Created: latency-svc-bn852
Feb 15 01:11:46.097: INFO: Got endpoints: latency-svc-bn852 [55.874388ms]
Feb 15 01:11:46.195: INFO: Created: latency-svc-xkb5g
Feb 15 01:11:46.198: INFO: Got endpoints: latency-svc-xkb5g [100.301002ms]
Feb 15 01:11:46.275: INFO: Created: latency-svc-jcfvp
Feb 15 01:11:46.364: INFO: Got endpoints: latency-svc-jcfvp [265.632488ms]
Feb 15 01:11:46.372: INFO: Created: latency-svc-m2bd7
Feb 15 01:11:46.376: INFO: Got endpoints: latency-svc-m2bd7 [278.474015ms]
Feb 15 01:11:46.403: INFO: Created: latency-svc-6ltcc
Feb 15 01:11:46.418: INFO: Got endpoints: latency-svc-6ltcc [320.024194ms]
Feb 15 01:11:46.452: INFO: Created: latency-svc-twsl6
Feb 15 01:11:46.577: INFO: Got endpoints: latency-svc-twsl6 [478.003398ms]
Feb 15 01:11:46.611: INFO: Created: latency-svc-8tzhc
Feb 15 01:11:46.635: INFO: Got endpoints: latency-svc-8tzhc [536.844725ms]
Feb 15 01:11:46.657: INFO: Created: latency-svc-7k8mx
Feb 15 01:11:46.665: INFO: Got endpoints: latency-svc-7k8mx [566.650712ms]
Feb 15 01:11:46.732: INFO: Created: latency-svc-rj5rq
Feb 15 01:11:46.738: INFO: Got endpoints: latency-svc-rj5rq [639.972395ms]
Feb 15 01:11:46.779: INFO: Created: latency-svc-vv5wg
Feb 15 01:11:46.797: INFO: Got endpoints: latency-svc-vv5wg [698.595105ms]
Feb 15 01:11:46.996: INFO: Created: latency-svc-kjcfj
Feb 15 01:11:46.999: INFO: Got endpoints: latency-svc-kjcfj [900.254188ms]
Feb 15 01:11:47.046: INFO: Created: latency-svc-ldmmr
Feb 15 01:11:47.082: INFO: Got endpoints: latency-svc-ldmmr [983.140986ms]
Feb 15 01:11:47.092: INFO: Created: latency-svc-xpgg7
Feb 15 01:11:47.154: INFO: Got endpoints: latency-svc-xpgg7 [1.055587874s]
Feb 15 01:11:47.157: INFO: Created: latency-svc-hm9p7
Feb 15 01:11:47.158: INFO: Got endpoints: latency-svc-hm9p7 [1.060520852s]
Feb 15 01:11:47.218: INFO: Created: latency-svc-ghnwh
Feb 15 01:11:47.223: INFO: Got endpoints: latency-svc-ghnwh [1.124855466s]
Feb 15 01:11:47.313: INFO: Created: latency-svc-gdvlf
Feb 15 01:11:47.338: INFO: Got endpoints: latency-svc-gdvlf [183.643247ms]
Feb 15 01:11:47.341: INFO: Created: latency-svc-mkwhz
Feb 15 01:11:47.354: INFO: Got endpoints: latency-svc-mkwhz [1.255901693s]
Feb 15 01:11:47.382: INFO: Created: latency-svc-b48vv
Feb 15 01:11:47.459: INFO: Got endpoints: latency-svc-b48vv [1.260693851s]
Feb 15 01:11:47.460: INFO: Created: latency-svc-g5bbx
Feb 15 01:11:47.469: INFO: Got endpoints: latency-svc-g5bbx [1.104776241s]
Feb 15 01:11:47.523: INFO: Created: latency-svc-f4xs4
Feb 15 01:11:47.529: INFO: Got endpoints: latency-svc-f4xs4 [1.152470734s]
Feb 15 01:11:47.622: INFO: Created: latency-svc-l945n
Feb 15 01:11:47.651: INFO: Created: latency-svc-kfrqk
Feb 15 01:11:47.657: INFO: Got endpoints: latency-svc-l945n [1.238877287s]
Feb 15 01:11:47.659: INFO: Got endpoints: latency-svc-kfrqk [1.081764544s]
Feb 15 01:11:47.685: INFO: Created: latency-svc-q7kqn
Feb 15 01:11:47.698: INFO: Got endpoints: latency-svc-q7kqn [1.063505828s]
Feb 15 01:11:47.724: INFO: Created: latency-svc-q26jw
Feb 15 01:11:47.797: INFO: Got endpoints: latency-svc-q26jw [1.131692541s]
Feb 15 01:11:47.810: INFO: Created: latency-svc-hqf8m
Feb 15 01:11:47.814: INFO: Got endpoints: latency-svc-hqf8m [1.075155063s]
Feb 15 01:11:47.852: INFO: Created: latency-svc-99xrb
Feb 15 01:11:47.865: INFO: Got endpoints: latency-svc-99xrb [1.068177233s]
Feb 15 01:11:47.951: INFO: Created: latency-svc-kjmpt
Feb 15 01:11:47.951: INFO: Got endpoints: latency-svc-kjmpt [952.695142ms]
Feb 15 01:11:47.975: INFO: Created: latency-svc-qf2nz
Feb 15 01:11:47.990: INFO: Got endpoints: latency-svc-qf2nz [908.573956ms]
Feb 15 01:11:48.005: INFO: Created: latency-svc-p9675
Feb 15 01:11:48.027: INFO: Created: latency-svc-hkjx9
Feb 15 01:11:48.028: INFO: Got endpoints: latency-svc-p9675 [869.308233ms]
Feb 15 01:11:48.030: INFO: Got endpoints: latency-svc-hkjx9 [806.912258ms]
Feb 15 01:11:48.112: INFO: Created: latency-svc-nmddq
Feb 15 01:11:48.112: INFO: Got endpoints: latency-svc-nmddq [773.841023ms]
Feb 15 01:11:48.146: INFO: Created: latency-svc-wtvfn
Feb 15 01:11:48.153: INFO: Got endpoints: latency-svc-wtvfn [798.416828ms]
Feb 15 01:11:48.195: INFO: Created: latency-svc-slsh7
Feb 15 01:11:48.279: INFO: Got endpoints: latency-svc-slsh7 [819.677543ms]
Feb 15 01:11:48.318: INFO: Created: latency-svc-f5l2n
Feb 15 01:11:48.331: INFO: Got endpoints: latency-svc-f5l2n [861.56819ms]
Feb 15 01:11:48.358: INFO: Created: latency-svc-zk7l9
Feb 15 01:11:48.478: INFO: Got endpoints: latency-svc-zk7l9 [948.792923ms]
Feb 15 01:11:48.486: INFO: Created: latency-svc-bp8xs
Feb 15 01:11:48.506: INFO: Got endpoints: latency-svc-bp8xs [847.706516ms]
Feb 15 01:11:48.570: INFO: Created: latency-svc-nww7j
Feb 15 01:11:48.571: INFO: Got endpoints: latency-svc-nww7j [913.963134ms]
Feb 15 01:11:48.630: INFO: Created: latency-svc-lr5z4
Feb 15 01:11:48.640: INFO: Got endpoints: latency-svc-lr5z4 [941.63728ms]
Feb 15 01:11:48.704: INFO: Created: latency-svc-8tz2c
Feb 15 01:11:48.717: INFO: Got endpoints: latency-svc-8tz2c [920.023173ms]
Feb 15 01:11:48.835: INFO: Created: latency-svc-d4v8d
Feb 15 01:11:48.853: INFO: Got endpoints: latency-svc-d4v8d [1.039069034s]
Feb 15 01:11:48.996: INFO: Created: latency-svc-q9lg7
Feb 15 01:11:49.038: INFO: Got endpoints: latency-svc-q9lg7 [1.172377016s]
Feb 15 01:11:49.047: INFO: Created: latency-svc-qnk6z
Feb 15 01:11:49.057: INFO: Got endpoints: latency-svc-qnk6z [1.105171644s]
Feb 15 01:11:49.165: INFO: Created: latency-svc-l9wsp
Feb 15 01:11:49.191: INFO: Created: latency-svc-dpqrs
Feb 15 01:11:49.193: INFO: Got endpoints: latency-svc-l9wsp [1.202491046s]
Feb 15 01:11:49.206: INFO: Got endpoints: latency-svc-dpqrs [1.178602708s]
Feb 15 01:11:49.236: INFO: Created: latency-svc-d8pcl
Feb 15 01:11:49.253: INFO: Got endpoints: latency-svc-d8pcl [1.222123605s]
Feb 15 01:11:49.255: INFO: Created: latency-svc-4xqxq
Feb 15 01:11:49.302: INFO: Got endpoints: latency-svc-4xqxq [1.190362138s]
Feb 15 01:11:49.323: INFO: Created: latency-svc-hhjcz
Feb 15 01:11:49.336: INFO: Got endpoints: latency-svc-hhjcz [1.182637122s]
Feb 15 01:11:49.337: INFO: Created: latency-svc-ppjcd
Feb 15 01:11:49.342: INFO: Got endpoints: latency-svc-ppjcd [1.062737023s]
Feb 15 01:11:49.366: INFO: Created: latency-svc-nnxmt
Feb 15 01:11:49.375: INFO: Got endpoints: latency-svc-nnxmt [1.044099234s]
Feb 15 01:11:49.398: INFO: Created: latency-svc-qzrgl
Feb 15 01:11:49.473: INFO: Got endpoints: latency-svc-qzrgl [994.741243ms]
Feb 15 01:11:49.477: INFO: Created: latency-svc-frlkr
Feb 15 01:11:49.499: INFO: Got endpoints: latency-svc-frlkr [992.400064ms]
Feb 15 01:11:49.528: INFO: Created: latency-svc-f859q
Feb 15 01:11:49.545: INFO: Got endpoints: latency-svc-f859q [973.766898ms]
Feb 15 01:11:49.636: INFO: Created: latency-svc-64wzk
Feb 15 01:11:49.657: INFO: Got endpoints: latency-svc-64wzk [1.017050064s]
Feb 15 01:11:49.697: INFO: Created: latency-svc-qxjxv
Feb 15 01:11:49.711: INFO: Got endpoints: latency-svc-qxjxv [993.266633ms]
Feb 15 01:11:49.732: INFO: Created: latency-svc-gdktn
Feb 15 01:11:49.770: INFO: Got endpoints: latency-svc-gdktn [917.420642ms]
Feb 15 01:11:49.820: INFO: Created: latency-svc-qd6rb
Feb 15 01:11:49.829: INFO: Got endpoints: latency-svc-qd6rb [790.947596ms]
Feb 15 01:11:49.867: INFO: Created: latency-svc-gmlkh
Feb 15 01:11:49.899: INFO: Got endpoints: latency-svc-gmlkh [842.828246ms]
Feb 15 01:11:49.929: INFO: Created: latency-svc-24bxd
Feb 15 01:11:49.931: INFO: Got endpoints: latency-svc-24bxd [737.791342ms]
Feb 15 01:11:50.046: INFO: Created: latency-svc-tfwh2
Feb 15 01:11:50.055: INFO: Got endpoints: latency-svc-tfwh2 [848.242061ms]
Feb 15 01:11:50.081: INFO: Created: latency-svc-mz78c
Feb 15 01:11:50.092: INFO: Got endpoints: latency-svc-mz78c [838.689228ms]
Feb 15 01:11:50.131: INFO: Created: latency-svc-5qfkd
Feb 15 01:11:50.213: INFO: Got endpoints: latency-svc-5qfkd [910.387365ms]
Feb 15 01:11:50.221: INFO: Created: latency-svc-7pghs
Feb 15 01:11:50.221: INFO: Got endpoints: latency-svc-7pghs [884.881614ms]
Feb 15 01:11:50.258: INFO: Created: latency-svc-f2r4g
Feb 15 01:11:50.264: INFO: Got endpoints: latency-svc-f2r4g [922.066039ms]
Feb 15 01:11:50.286: INFO: Created: latency-svc-9lxfc
Feb 15 01:11:50.297: INFO: Got endpoints: latency-svc-9lxfc [921.123392ms]
Feb 15 01:11:50.382: INFO: Created: latency-svc-mq27w
Feb 15 01:11:50.382: INFO: Got endpoints: latency-svc-mq27w [908.865344ms]
Feb 15 01:11:50.428: INFO: Created: latency-svc-4n2k5
Feb 15 01:11:50.430: INFO: Got endpoints: latency-svc-4n2k5 [930.730724ms]
Feb 15 01:11:50.477: INFO: Created: latency-svc-26hfp
Feb 15 01:11:50.533: INFO: Got endpoints: latency-svc-26hfp [987.401528ms]
Feb 15 01:11:50.552: INFO: Created: latency-svc-t6zp2
Feb 15 01:11:50.585: INFO: Got endpoints: latency-svc-t6zp2 [927.153404ms]
Feb 15 01:11:50.585: INFO: Created: latency-svc-hcmjp
Feb 15 01:11:50.692: INFO: Got endpoints: latency-svc-hcmjp [981.753691ms]
Feb 15 01:11:50.726: INFO: Created: latency-svc-cc6d2
Feb 15 01:11:50.736: INFO: Got endpoints: latency-svc-cc6d2 [965.150924ms]
Feb 15 01:11:50.781: INFO: Created: latency-svc-2jzwl
Feb 15 01:11:50.783: INFO: Got endpoints: latency-svc-2jzwl [953.501653ms]
Feb 15 01:11:50.875: INFO: Created: latency-svc-65544
Feb 15 01:11:50.924: INFO: Got endpoints: latency-svc-65544 [1.024465673s]
Feb 15 01:11:50.950: INFO: Created: latency-svc-rgsnj
Feb 15 01:11:51.130: INFO: Got endpoints: latency-svc-rgsnj [1.199081278s]
Feb 15 01:11:51.150: INFO: Created: latency-svc-cksmt
Feb 15 01:11:51.166: INFO: Got endpoints: latency-svc-cksmt [1.111083527s]
Feb 15 01:11:51.220: INFO: Created: latency-svc-826bn
Feb 15 01:11:51.319: INFO: Got endpoints: latency-svc-826bn [1.226888495s]
Feb 15 01:11:51.327: INFO: Created: latency-svc-mjwjh
Feb 15 01:11:51.344: INFO: Got endpoints: latency-svc-mjwjh [1.131026195s]
Feb 15 01:11:51.367: INFO: Created: latency-svc-qrxbs
Feb 15 01:11:51.377: INFO: Got endpoints: latency-svc-qrxbs [1.156145747s]
Feb 15 01:11:51.397: INFO: Created: latency-svc-vf7ll
Feb 15 01:11:51.404: INFO: Got endpoints: latency-svc-vf7ll [1.140235162s]
Feb 15 01:11:51.480: INFO: Created: latency-svc-xrv9k
Feb 15 01:11:51.486: INFO: Got endpoints: latency-svc-xrv9k [1.18981997s]
Feb 15 01:11:51.516: INFO: Created: latency-svc-td8d8
Feb 15 01:11:51.520: INFO: Got endpoints: latency-svc-td8d8 [1.137711775s]
Feb 15 01:11:51.536: INFO: Created: latency-svc-z2nm9
Feb 15 01:11:51.538: INFO: Got endpoints: latency-svc-z2nm9 [1.107862586s]
Feb 15 01:11:51.566: INFO: Created: latency-svc-lcczt
Feb 15 01:11:51.609: INFO: Got endpoints: latency-svc-lcczt [1.076653842s]
Feb 15 01:11:51.636: INFO: Created: latency-svc-rml4z
Feb 15 01:11:51.641: INFO: Got endpoints: latency-svc-rml4z [1.05519588s]
Feb 15 01:11:51.664: INFO: Created: latency-svc-mf64n
Feb 15 01:11:51.672: INFO: Got endpoints: latency-svc-mf64n [979.681878ms]
Feb 15 01:11:51.694: INFO: Created: latency-svc-4n9jj
Feb 15 01:11:51.702: INFO: Got endpoints: latency-svc-4n9jj [966.400004ms]
Feb 15 01:11:51.767: INFO: Created: latency-svc-76bzq
Feb 15 01:11:51.785: INFO: Got endpoints: latency-svc-76bzq [1.002082462s]
Feb 15 01:11:51.795: INFO: Created: latency-svc-kpxpq
Feb 15 01:11:51.797: INFO: Got endpoints: latency-svc-kpxpq [872.730556ms]
Feb 15 01:11:51.846: INFO: Created: latency-svc-cm8fb
Feb 15 01:11:51.856: INFO: Got endpoints: latency-svc-cm8fb [725.245328ms]
Feb 15 01:11:51.930: INFO: Created: latency-svc-c2wzp
Feb 15 01:11:51.992: INFO: Got endpoints: latency-svc-c2wzp [825.3755ms]
Feb 15 01:11:51.998: INFO: Created: latency-svc-5j79z
Feb 15 01:11:52.002: INFO: Got endpoints: latency-svc-5j79z [683.514281ms]
Feb 15 01:11:52.134: INFO: Created: latency-svc-j5g7l
Feb 15 01:11:52.187: INFO: Created: latency-svc-fqk4r
Feb 15 01:11:52.198: INFO: Got endpoints: latency-svc-j5g7l [853.354607ms]
Feb 15 01:11:52.263: INFO: Got endpoints: latency-svc-fqk4r [885.703762ms]
Feb 15 01:11:52.265: INFO: Created: latency-svc-6nc5h
Feb 15 01:11:52.275: INFO: Got endpoints: latency-svc-6nc5h [871.162306ms]
Feb 15 01:11:52.293: INFO: Created: latency-svc-b88wj
Feb 15 01:11:52.303: INFO: Got endpoints: latency-svc-b88wj [816.354501ms]
Feb 15 01:11:52.351: INFO: Created: latency-svc-7vz88
Feb 15 01:11:52.407: INFO: Created: latency-svc-r4mjl
Feb 15 01:11:52.407: INFO: Got endpoints: latency-svc-7vz88 [887.08937ms]
Feb 15 01:11:52.415: INFO: Got endpoints: latency-svc-r4mjl [877.085707ms]
Feb 15 01:11:52.450: INFO: Created: latency-svc-ftccs
Feb 15 01:11:52.481: INFO: Got endpoints: latency-svc-ftccs [871.433849ms]
Feb 15 01:11:52.571: INFO: Created: latency-svc-wzxwg
Feb 15 01:11:52.575: INFO: Got endpoints: latency-svc-wzxwg [934.142162ms]
Feb 15 01:11:52.627: INFO: Created: latency-svc-66v89
Feb 15 01:11:52.636: INFO: Got endpoints: latency-svc-66v89 [963.690419ms]
Feb 15 01:11:52.647: INFO: Created: latency-svc-j7q9m
Feb 15 01:11:52.658: INFO: Got endpoints: latency-svc-j7q9m [955.361611ms]
Feb 15 01:11:52.783: INFO: Created: latency-svc-4rm8b
Feb 15 01:11:52.794: INFO: Got endpoints: latency-svc-4rm8b [1.008934447s]
Feb 15 01:11:52.820: INFO: Created: latency-svc-svf5b
Feb 15 01:11:52.825: INFO: Got endpoints: latency-svc-svf5b [1.027314986s]
Feb 15 01:11:52.853: INFO: Created: latency-svc-wlmvg
Feb 15 01:11:52.877: INFO: Created: latency-svc-gdhr9
Feb 15 01:11:52.877: INFO: Got endpoints: latency-svc-wlmvg [1.021047205s]
Feb 15 01:11:52.921: INFO: Got endpoints: latency-svc-gdhr9 [929.0145ms]
Feb 15 01:11:52.963: INFO: Created: latency-svc-n7cxz
Feb 15 01:11:52.977: INFO: Got endpoints: latency-svc-n7cxz [974.871399ms]
Feb 15 01:11:53.106: INFO: Created: latency-svc-vdz8h
Feb 15 01:11:53.148: INFO: Got endpoints: latency-svc-vdz8h [949.11433ms]
Feb 15 01:11:53.152: INFO: Created: latency-svc-fk2fw
Feb 15 01:11:53.177: INFO: Got endpoints: latency-svc-fk2fw [913.949538ms]
Feb 15 01:11:53.206: INFO: Created: latency-svc-vv96n
Feb 15 01:11:53.237: INFO: Got endpoints: latency-svc-vv96n [961.197376ms]
Feb 15 01:11:53.258: INFO: Created: latency-svc-bfh7l
Feb 15 01:11:53.273: INFO: Got endpoints: latency-svc-bfh7l [969.431562ms]
Feb 15 01:11:53.275: INFO: Created: latency-svc-98zg9
Feb 15 01:11:53.279: INFO: Got endpoints: latency-svc-98zg9 [871.390671ms]
Feb 15 01:11:53.324: INFO: Created: latency-svc-46vfv
Feb 15 01:11:53.335: INFO: Got endpoints: latency-svc-46vfv [919.909395ms]
Feb 15 01:11:53.444: INFO: Created: latency-svc-fb5hp
Feb 15 01:11:53.459: INFO: Got endpoints: latency-svc-fb5hp [977.699946ms]
Feb 15 01:11:53.493: INFO: Created: latency-svc-zdvb8
Feb 15 01:11:53.530: INFO: Got endpoints: latency-svc-zdvb8 [955.194641ms]
Feb 15 01:11:53.535: INFO: Created: latency-svc-dmrzr
Feb 15 01:11:53.585: INFO: Got endpoints: latency-svc-dmrzr [948.473562ms]
Feb 15 01:11:53.606: INFO: Created: latency-svc-fjwbj
Feb 15 01:11:53.618: INFO: Got endpoints: latency-svc-fjwbj [959.988856ms]
Feb 15 01:11:53.661: INFO: Created: latency-svc-n69cf
Feb 15 01:11:53.671: INFO: Got endpoints: latency-svc-n69cf [876.34018ms]
Feb 15 01:11:53.744: INFO: Created: latency-svc-jbtp8
Feb 15 01:11:53.755: INFO: Got endpoints: latency-svc-jbtp8 [930.078829ms]
Feb 15 01:11:53.809: INFO: Created: latency-svc-2kkrl
Feb 15 01:11:53.819: INFO: Got endpoints: latency-svc-2kkrl [941.608556ms]
Feb 15 01:11:53.898: INFO: Created: latency-svc-sq2nc
Feb 15 01:11:53.939: INFO: Got endpoints: latency-svc-sq2nc [1.01745843s]
Feb 15 01:11:53.939: INFO: Created: latency-svc-wsfms
Feb 15 01:11:53.944: INFO: Got endpoints: latency-svc-wsfms [966.966406ms]
Feb 15 01:11:54.003: INFO: Created: latency-svc-zbzw2
Feb 15 01:11:54.039: INFO: Got endpoints: latency-svc-zbzw2 [890.896125ms]
Feb 15 01:11:54.112: INFO: Created: latency-svc-r96mb
Feb 15 01:11:54.134: INFO: Got endpoints: latency-svc-r96mb [956.512293ms]
Feb 15 01:11:54.172: INFO: Created: latency-svc-jbmns
Feb 15 01:11:54.180: INFO: Got endpoints: latency-svc-jbmns [942.568882ms]
Feb 15 01:11:54.238: INFO: Created: latency-svc-g482m
Feb 15 01:11:54.253: INFO: Got endpoints: latency-svc-g482m [979.754468ms]
Feb 15 01:11:54.314: INFO: Created: latency-svc-vv286
Feb 15 01:11:54.365: INFO: Created: latency-svc-268gx
Feb 15 01:11:54.365: INFO: Got endpoints: latency-svc-vv286 [1.086550587s]
Feb 15 01:11:54.393: INFO: Got endpoints: latency-svc-268gx [1.0576281s]
Feb 15 01:11:54.468: INFO: Created: latency-svc-ndbgl
Feb 15 01:11:54.503: INFO: Created: latency-svc-pz662
Feb 15 01:11:54.503: INFO: Got endpoints: latency-svc-ndbgl [1.043490874s]
Feb 15 01:11:54.542: INFO: Got endpoints: latency-svc-pz662 [1.011205067s]
Feb 15 01:11:54.546: INFO: Created: latency-svc-zj8lp
Feb 15 01:11:54.644: INFO: Got endpoints: latency-svc-zj8lp [1.059413017s]
Feb 15 01:11:54.652: INFO: Created: latency-svc-pvhvc
Feb 15 01:11:54.659: INFO: Got endpoints: latency-svc-pvhvc [1.040695011s]
Feb 15 01:11:54.683: INFO: Created: latency-svc-6s24f
Feb 15 01:11:54.685: INFO: Got endpoints: latency-svc-6s24f [1.014511572s]
Feb 15 01:11:54.700: INFO: Created: latency-svc-4jhhh
Feb 15 01:11:54.708: INFO: Got endpoints: latency-svc-4jhhh [952.42795ms]
Feb 15 01:11:54.860: INFO: Created: latency-svc-b47vb
Feb 15 01:11:54.889: INFO: Got endpoints: latency-svc-b47vb [1.069711237s]
Feb 15 01:11:54.890: INFO: Created: latency-svc-b2m4v
Feb 15 01:11:54.906: INFO: Got endpoints: latency-svc-b2m4v [967.249703ms]
Feb 15 01:11:55.054: INFO: Created: latency-svc-zvk5g
Feb 15 01:11:55.083: INFO: Got endpoints: latency-svc-zvk5g [1.138144137s]
Feb 15 01:11:55.092: INFO: Created: latency-svc-v5w27
Feb 15 01:11:55.099: INFO: Got endpoints: latency-svc-v5w27 [1.06016603s]
Feb 15 01:11:55.136: INFO: Created: latency-svc-p8cgm
Feb 15 01:11:55.145: INFO: Got endpoints: latency-svc-p8cgm [1.011057197s]
Feb 15 01:11:55.253: INFO: Created: latency-svc-pmxwn
Feb 15 01:11:55.266: INFO: Got endpoints: latency-svc-pmxwn [1.086122432s]
Feb 15 01:11:55.303: INFO: Created: latency-svc-fv6nm
Feb 15 01:11:55.305: INFO: Got endpoints: latency-svc-fv6nm [1.052497274s]
Feb 15 01:11:55.328: INFO: Created: latency-svc-zwp4n
Feb 15 01:11:55.351: INFO: Got endpoints: latency-svc-zwp4n [985.425748ms]
Feb 15 01:11:55.352: INFO: Created: latency-svc-4c6x5
Feb 15 01:11:55.405: INFO: Got endpoints: latency-svc-4c6x5 [1.01210987s]
Feb 15 01:11:55.440: INFO: Created: latency-svc-ft2jg
Feb 15 01:11:55.448: INFO: Got endpoints: latency-svc-ft2jg [945.070316ms]
Feb 15 01:11:55.476: INFO: Created: latency-svc-4kqbp
Feb 15 01:11:55.542: INFO: Got endpoints: latency-svc-4kqbp [999.441734ms]
Feb 15 01:11:55.562: INFO: Created: latency-svc-dghkq
Feb 15 01:11:55.583: INFO: Got endpoints: latency-svc-dghkq [938.404436ms]
Feb 15 01:11:55.589: INFO: Created: latency-svc-jb6cz
Feb 15 01:11:55.600: INFO: Got endpoints: latency-svc-jb6cz [940.625966ms]
Feb 15 01:11:55.634: INFO: Created: latency-svc-hpc9k
Feb 15 01:11:55.637: INFO: Got endpoints: latency-svc-hpc9k [951.424879ms]
Feb 15 01:11:55.688: INFO: Created: latency-svc-vks7j
Feb 15 01:11:55.691: INFO: Got endpoints: latency-svc-vks7j [982.837546ms]
Feb 15 01:11:55.717: INFO: Created: latency-svc-sw4km
Feb 15 01:11:55.726: INFO: Got endpoints: latency-svc-sw4km [836.018139ms]
Feb 15 01:11:55.750: INFO: Created: latency-svc-6fvq7
Feb 15 01:11:55.753: INFO: Got endpoints: latency-svc-6fvq7 [846.299031ms]
Feb 15 01:11:55.842: INFO: Created: latency-svc-vwcw9
Feb 15 01:11:55.842: INFO: Got endpoints: latency-svc-vwcw9 [759.006434ms]
Feb 15 01:11:55.896: INFO: Created: latency-svc-b8d6l
Feb 15 01:11:55.898: INFO: Got endpoints: latency-svc-b8d6l [798.614116ms]
Feb 15 01:11:55.925: INFO: Created: latency-svc-5lv6b
Feb 15 01:11:55.931: INFO: Got endpoints: latency-svc-5lv6b [786.370554ms]
Feb 15 01:11:55.991: INFO: Created: latency-svc-bwjqw
Feb 15 01:11:55.997: INFO: Got endpoints: latency-svc-bwjqw [730.753796ms]
Feb 15 01:11:56.028: INFO: Created: latency-svc-lgp4m
Feb 15 01:11:56.028: INFO: Got endpoints: latency-svc-lgp4m [722.543946ms]
Feb 15 01:11:56.050: INFO: Created: latency-svc-9hkwv
Feb 15 01:11:56.051: INFO: Got endpoints: latency-svc-9hkwv [700.173454ms]
Feb 15 01:11:56.074: INFO: Created: latency-svc-8hrfd
Feb 15 01:11:56.081: INFO: Got endpoints: latency-svc-8hrfd [676.41088ms]
Feb 15 01:11:56.138: INFO: Created: latency-svc-dhtmj
Feb 15 01:11:56.149: INFO: Got endpoints: latency-svc-dhtmj [700.480493ms]
Feb 15 01:11:56.165: INFO: Created: latency-svc-75n9m
Feb 15 01:11:56.169: INFO: Got endpoints: latency-svc-75n9m [627.755687ms]
Feb 15 01:11:56.186: INFO: Created: latency-svc-lzcbx
Feb 15 01:11:56.273: INFO: Got endpoints: latency-svc-lzcbx [689.471146ms]
Feb 15 01:11:56.294: INFO: Created: latency-svc-vdgfx
Feb 15 01:11:56.300: INFO: Got endpoints: latency-svc-vdgfx [700.267305ms]
Feb 15 01:11:56.332: INFO: Created: latency-svc-vp6hn
Feb 15 01:11:56.364: INFO: Got endpoints: latency-svc-vp6hn [727.407965ms]
Feb 15 01:11:56.448: INFO: Created: latency-svc-2jksb
Feb 15 01:11:56.477: INFO: Got endpoints: latency-svc-2jksb [786.478354ms]
Feb 15 01:11:56.479: INFO: Created: latency-svc-bmwxc
Feb 15 01:11:56.519: INFO: Got endpoints: latency-svc-bmwxc [792.948862ms]
Feb 15 01:11:56.525: INFO: Created: latency-svc-hc245
Feb 15 01:11:56.546: INFO: Got endpoints: latency-svc-hc245 [792.830001ms]
Feb 15 01:11:56.588: INFO: Created: latency-svc-l4z8r
Feb 15 01:11:56.599: INFO: Got endpoints: latency-svc-l4z8r [756.950022ms]
Feb 15 01:11:56.703: INFO: Created: latency-svc-2m72s
Feb 15 01:11:56.775: INFO: Got endpoints: latency-svc-2m72s [877.554085ms]
Feb 15 01:11:56.784: INFO: Created: latency-svc-6t4kz
Feb 15 01:11:56.805: INFO: Got endpoints: latency-svc-6t4kz [873.520523ms]
Feb 15 01:11:56.828: INFO: Created: latency-svc-z84mj
Feb 15 01:11:56.841: INFO: Got endpoints: latency-svc-z84mj [844.218978ms]
Feb 15 01:11:56.874: INFO: Created: latency-svc-9z7vz
Feb 15 01:11:56.988: INFO: Got endpoints: latency-svc-9z7vz [960.157785ms]
Feb 15 01:11:56.993: INFO: Created: latency-svc-drx4g
Feb 15 01:11:56.999: INFO: Got endpoints: latency-svc-drx4g [948.180728ms]
Feb 15 01:11:57.115: INFO: Created: latency-svc-ntsts
Feb 15 01:11:57.139: INFO: Got endpoints: latency-svc-ntsts [1.056986938s]
Feb 15 01:11:57.139: INFO: Created: latency-svc-pv7sj
Feb 15 01:11:57.147: INFO: Got endpoints: latency-svc-pv7sj [997.894271ms]
Feb 15 01:11:57.176: INFO: Created: latency-svc-sdzrw
Feb 15 01:11:57.176: INFO: Got endpoints: latency-svc-sdzrw [1.006714617s]
Feb 15 01:11:57.276: INFO: Created: latency-svc-pvg9c
Feb 15 01:11:57.283: INFO: Got endpoints: latency-svc-pvg9c [1.010597529s]
Feb 15 01:11:57.313: INFO: Created: latency-svc-9tkch
Feb 15 01:11:57.316: INFO: Got endpoints: latency-svc-9tkch [1.015968028s]
Feb 15 01:11:57.511: INFO: Created: latency-svc-qgjnb
Feb 15 01:11:57.545: INFO: Created: latency-svc-t77db
Feb 15 01:11:57.546: INFO: Got endpoints: latency-svc-qgjnb [1.181197522s]
Feb 15 01:11:57.565: INFO: Got endpoints: latency-svc-t77db [1.08707621s]
Feb 15 01:11:57.607: INFO: Created: latency-svc-276r6
Feb 15 01:11:57.755: INFO: Got endpoints: latency-svc-276r6 [1.235842274s]
Feb 15 01:11:57.761: INFO: Created: latency-svc-d945p
Feb 15 01:11:57.763: INFO: Got endpoints: latency-svc-d945p [1.216871581s]
Feb 15 01:11:57.983: INFO: Created: latency-svc-7dzql
Feb 15 01:11:58.025: INFO: Got endpoints: latency-svc-7dzql [1.425544225s]
Feb 15 01:11:58.028: INFO: Created: latency-svc-xnxrh
Feb 15 01:11:58.046: INFO: Got endpoints: latency-svc-xnxrh [1.270879476s]
Feb 15 01:11:58.163: INFO: Created: latency-svc-kb4fn
Feb 15 01:11:58.179: INFO: Got endpoints: latency-svc-kb4fn [1.373279865s]
Feb 15 01:11:58.374: INFO: Created: latency-svc-dfsbx
Feb 15 01:11:58.384: INFO: Got endpoints: latency-svc-dfsbx [1.542782302s]
Feb 15 01:11:58.447: INFO: Created: latency-svc-29k6l
Feb 15 01:11:58.456: INFO: Got endpoints: latency-svc-29k6l [1.468065452s]
Feb 15 01:11:58.578: INFO: Created: latency-svc-ksghc
Feb 15 01:11:58.588: INFO: Got endpoints: latency-svc-ksghc [1.588620651s]
Feb 15 01:11:58.608: INFO: Created: latency-svc-bkcrl
Feb 15 01:11:58.616: INFO: Got endpoints: latency-svc-bkcrl [1.477026141s]
Feb 15 01:11:58.636: INFO: Created: latency-svc-zhgwd
Feb 15 01:11:58.693: INFO: Got endpoints: latency-svc-zhgwd [1.545167152s]
Feb 15 01:11:58.715: INFO: Created: latency-svc-vnq7x
Feb 15 01:11:58.724: INFO: Got endpoints: latency-svc-vnq7x [1.547596539s]
Feb 15 01:11:58.750: INFO: Created: latency-svc-wz27s
Feb 15 01:11:58.754: INFO: Got endpoints: latency-svc-wz27s [1.470947191s]
Feb 15 01:11:58.771: INFO: Created: latency-svc-cbjmq
Feb 15 01:11:58.787: INFO: Got endpoints: latency-svc-cbjmq [1.470595948s]
Feb 15 01:11:58.874: INFO: Created: latency-svc-4bjxp
Feb 15 01:11:58.914: INFO: Got endpoints: latency-svc-4bjxp [1.368686139s]
Feb 15 01:11:58.936: INFO: Created: latency-svc-zczgg
Feb 15 01:11:58.937: INFO: Got endpoints: latency-svc-zczgg [1.372098943s]
Feb 15 01:11:59.041: INFO: Created: latency-svc-gdgpd
Feb 15 01:11:59.059: INFO: Got endpoints: latency-svc-gdgpd [1.303436694s]
Feb 15 01:11:59.092: INFO: Created: latency-svc-khczg
Feb 15 01:11:59.097: INFO: Got endpoints: latency-svc-khczg [1.333659924s]
Feb 15 01:11:59.130: INFO: Created: latency-svc-vvdsp
Feb 15 01:11:59.193: INFO: Got endpoints: latency-svc-vvdsp [1.168106839s]
Feb 15 01:11:59.219: INFO: Created: latency-svc-729gn
Feb 15 01:11:59.259: INFO: Got endpoints: latency-svc-729gn [1.212371721s]
Feb 15 01:11:59.291: INFO: Created: latency-svc-spgd2
Feb 15 01:11:59.360: INFO: Got endpoints: latency-svc-spgd2 [1.181499815s]
Feb 15 01:11:59.363: INFO: Created: latency-svc-krgsb
Feb 15 01:11:59.395: INFO: Got endpoints: latency-svc-krgsb [1.010573086s]
Feb 15 01:11:59.400: INFO: Created: latency-svc-ltkh5
Feb 15 01:11:59.427: INFO: Got endpoints: latency-svc-ltkh5 [970.643362ms]
Feb 15 01:11:59.478: INFO: Created: latency-svc-4x7cc
Feb 15 01:11:59.485: INFO: Got endpoints: latency-svc-4x7cc [896.19855ms]
Feb 15 01:11:59.508: INFO: Created: latency-svc-rxchk
Feb 15 01:11:59.508: INFO: Got endpoints: latency-svc-rxchk [892.369935ms]
Feb 15 01:11:59.508: INFO: Latencies: [100.301002ms 183.643247ms 265.632488ms 278.474015ms 320.024194ms 478.003398ms 536.844725ms 566.650712ms 627.755687ms 639.972395ms 676.41088ms 683.514281ms 689.471146ms 698.595105ms 700.173454ms 700.267305ms 700.480493ms 722.543946ms 725.245328ms 727.407965ms 730.753796ms 737.791342ms 756.950022ms 759.006434ms 773.841023ms 786.370554ms 786.478354ms 790.947596ms 792.830001ms 792.948862ms 798.416828ms 798.614116ms 806.912258ms 816.354501ms 819.677543ms 825.3755ms 836.018139ms 838.689228ms 842.828246ms 844.218978ms 846.299031ms 847.706516ms 848.242061ms 853.354607ms 861.56819ms 869.308233ms 871.162306ms 871.390671ms 871.433849ms 872.730556ms 873.520523ms 876.34018ms 877.085707ms 877.554085ms 884.881614ms 885.703762ms 887.08937ms 890.896125ms 892.369935ms 896.19855ms 900.254188ms 908.573956ms 908.865344ms 910.387365ms 913.949538ms 913.963134ms 917.420642ms 919.909395ms 920.023173ms 921.123392ms 922.066039ms 927.153404ms 929.0145ms 930.078829ms 930.730724ms 934.142162ms 938.404436ms 940.625966ms 941.608556ms 941.63728ms 942.568882ms 945.070316ms 948.180728ms 948.473562ms 948.792923ms 949.11433ms 951.424879ms 952.42795ms 952.695142ms 953.501653ms 955.194641ms 955.361611ms 956.512293ms 959.988856ms 960.157785ms 961.197376ms 963.690419ms 965.150924ms 966.400004ms 966.966406ms 967.249703ms 969.431562ms 970.643362ms 973.766898ms 974.871399ms 977.699946ms 979.681878ms 979.754468ms 981.753691ms 982.837546ms 983.140986ms 985.425748ms 987.401528ms 992.400064ms 993.266633ms 994.741243ms 997.894271ms 999.441734ms 1.002082462s 1.006714617s 1.008934447s 1.010573086s 1.010597529s 1.011057197s 1.011205067s 1.01210987s 1.014511572s 1.015968028s 1.017050064s 1.01745843s 1.021047205s 1.024465673s 1.027314986s 1.039069034s 1.040695011s 1.043490874s 1.044099234s 1.052497274s 1.05519588s 1.055587874s 1.056986938s 1.0576281s 1.059413017s 1.06016603s 1.060520852s 1.062737023s 1.063505828s 1.068177233s 1.069711237s 1.075155063s 1.076653842s 1.081764544s 1.086122432s 1.086550587s 1.08707621s 1.104776241s 1.105171644s 1.107862586s 1.111083527s 1.124855466s 1.131026195s 1.131692541s 1.137711775s 1.138144137s 1.140235162s 1.152470734s 1.156145747s 1.168106839s 1.172377016s 1.178602708s 1.181197522s 1.181499815s 1.182637122s 1.18981997s 1.190362138s 1.199081278s 1.202491046s 1.212371721s 1.216871581s 1.222123605s 1.226888495s 1.235842274s 1.238877287s 1.255901693s 1.260693851s 1.270879476s 1.303436694s 1.333659924s 1.368686139s 1.372098943s 1.373279865s 1.425544225s 1.468065452s 1.470595948s 1.470947191s 1.477026141s 1.542782302s 1.545167152s 1.547596539s 1.588620651s]
Feb 15 01:11:59.509: INFO: 50 %ile: 967.249703ms
Feb 15 01:11:59.509: INFO: 90 %ile: 1.226888495s
Feb 15 01:11:59.509: INFO: 99 %ile: 1.547596539s
Feb 15 01:11:59.509: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:11:59.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-470" for this suite.

• [SLOW TEST:23.840 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":205,"skipped":3389,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:11:59.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:11:59.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661" in namespace "projected-122" to be "success or failure"
Feb 15 01:11:59.698: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661": Phase="Pending", Reason="", readiness=false. Elapsed: 24.914248ms
Feb 15 01:12:01.707: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03432209s
Feb 15 01:12:03.715: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042004139s
Feb 15 01:12:05.756: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082861619s
Feb 15 01:12:07.829: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155978669s
Feb 15 01:12:09.835: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162333769s
STEP: Saw pod success
Feb 15 01:12:09.835: INFO: Pod "downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661" satisfied condition "success or failure"
Feb 15 01:12:09.841: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661 container client-container: 
STEP: delete the pod
Feb 15 01:12:09.910: INFO: Waiting for pod downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661 to disappear
Feb 15 01:12:09.950: INFO: Pod downwardapi-volume-635acb21-5a59-4b61-9c99-dfac543f7661 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:12:09.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-122" for this suite.

• [SLOW TEST:10.428 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":206,"skipped":3407,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:12:09.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-6126/configmap-test-1e598417-cc9a-41ff-b86b-9792a831011b
STEP: Creating a pod to test consume configMaps
Feb 15 01:12:10.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082" in namespace "configmap-6126" to be "success or failure"
Feb 15 01:12:10.418: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 138.019411ms
Feb 15 01:12:12.488: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208000114s
Feb 15 01:12:14.501: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221518445s
Feb 15 01:12:16.603: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323607393s
Feb 15 01:12:18.615: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334975584s
Feb 15 01:12:20.715: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 10.435673955s
Feb 15 01:12:22.758: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 12.478607673s
Feb 15 01:12:24.764: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 14.484377325s
Feb 15 01:12:26.792: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Pending", Reason="", readiness=false. Elapsed: 16.512125869s
Feb 15 01:12:28.893: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.613681758s
STEP: Saw pod success
Feb 15 01:12:28.894: INFO: Pod "pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082" satisfied condition "success or failure"
Feb 15 01:12:28.922: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082 container env-test: 
STEP: delete the pod
Feb 15 01:12:29.177: INFO: Waiting for pod pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082 to disappear
Feb 15 01:12:29.400: INFO: Pod pod-configmaps-9161748f-953f-47a8-8589-3a7683e5d082 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:12:29.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6126" for this suite.

• [SLOW TEST:19.440 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":207,"skipped":3408,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:12:29.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:12:30.006: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1" in namespace "security-context-test-498" to be "success or failure"
Feb 15 01:12:30.097: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 90.531923ms
Feb 15 01:12:32.128: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122208688s
Feb 15 01:12:34.163: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156971267s
Feb 15 01:12:36.171: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165191125s
Feb 15 01:12:38.178: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172153831s
Feb 15 01:12:40.187: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181208592s
Feb 15 01:12:40.188: INFO: Pod "alpine-nnp-false-6b56a4c4-4804-47e1-97c5-096ad5059fe1" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:12:40.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-498" for this suite.

• [SLOW TEST:10.912 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":208,"skipped":3426,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:12:40.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 01:12:47.810: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:12:47.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9244" for this suite.

• [SLOW TEST:7.585 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":209,"skipped":3435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:12:47.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:12:48.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f" in namespace "downward-api-7787" to be "success or failure"
Feb 15 01:12:48.058: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.959428ms
Feb 15 01:12:50.064: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024287147s
Feb 15 01:12:52.083: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043621981s
Feb 15 01:12:54.088: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048766753s
Feb 15 01:12:56.094: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05445013s
Feb 15 01:12:58.100: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060644506s
STEP: Saw pod success
Feb 15 01:12:58.100: INFO: Pod "downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f" satisfied condition "success or failure"
Feb 15 01:12:58.103: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f container client-container: 
STEP: delete the pod
Feb 15 01:12:58.172: INFO: Waiting for pod downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f to disappear
Feb 15 01:12:58.180: INFO: Pod downwardapi-volume-5cd1e4c3-8392-4183-a5b2-15c1888a7c4f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:12:58.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7787" for this suite.

• [SLOW TEST:10.415 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3487,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:12:58.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-6805
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6805 to expose endpoints map[]
Feb 15 01:12:58.634: INFO: successfully validated that service endpoint-test2 in namespace services-6805 exposes endpoints map[] (17.755321ms elapsed)
STEP: Creating pod pod1 in namespace services-6805
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6805 to expose endpoints map[pod1:[80]]
Feb 15 01:13:02.894: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.142342209s elapsed, will retry)
Feb 15 01:13:05.928: INFO: successfully validated that service endpoint-test2 in namespace services-6805 exposes endpoints map[pod1:[80]] (7.176455842s elapsed)
STEP: Creating pod pod2 in namespace services-6805
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6805 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 15 01:13:10.893: INFO: Unexpected endpoints: found map[717db353-e316-4fe6-a27f-11a834dc27e6:[80]], expected map[pod1:[80] pod2:[80]] (4.958439803s elapsed, will retry)
Feb 15 01:13:12.917: INFO: successfully validated that service endpoint-test2 in namespace services-6805 exposes endpoints map[pod1:[80] pod2:[80]] (6.982445343s elapsed)
STEP: Deleting pod pod1 in namespace services-6805
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6805 to expose endpoints map[pod2:[80]]
Feb 15 01:13:13.004: INFO: successfully validated that service endpoint-test2 in namespace services-6805 exposes endpoints map[pod2:[80]] (75.48502ms elapsed)
STEP: Deleting pod pod2 in namespace services-6805
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6805 to expose endpoints map[]
Feb 15 01:13:14.024: INFO: successfully validated that service endpoint-test2 in namespace services-6805 exposes endpoints map[] (1.014064107s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:13:14.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6805" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:15.764 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":211,"skipped":3491,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:13:14.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-c746ae8d-4c2d-428c-90eb-d97221bfb0a9
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c746ae8d-4c2d-428c-90eb-d97221bfb0a9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:13:26.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-383" for this suite.

• [SLOW TEST:12.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":212,"skipped":3492,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:13:26.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Feb 15 01:13:26.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 15 01:13:26.795: INFO: stderr: ""
Feb 15 01:13:26.796: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:13:26.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-494" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":280,"completed":213,"skipped":3543,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:13:26.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:13:26.913: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 15 01:13:26.956: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 15 01:13:31.960: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 15 01:13:34.007: INFO: Creating deployment "test-rolling-update-deployment"
Feb 15 01:13:34.013: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 15 01:13:34.045: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 15 01:13:36.054: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 15 01:13:36.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:13:38.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:13:40.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326014, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:13:42.064: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 15 01:13:42.078: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-79 /apis/apps/v1/namespaces/deployment-79/deployments/test-rolling-update-deployment f807dda2-ff96-4196-a3fe-466654653162 8495392 1 2020-02-15 01:13:34 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004033c18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-15 01:13:34 +0000 UTC,LastTransitionTime:2020-02-15 01:13:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-15 01:13:41 +0000 UTC,LastTransitionTime:2020-02-15 01:13:34 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 15 01:13:42.083: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-79 /apis/apps/v1/namespaces/deployment-79/replicasets/test-rolling-update-deployment-67cf4f6444 8c5d8b49-eda6-49ff-9dc9-1d7047b47e95 8495381 1 2020-02-15 01:13:34 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f807dda2-ff96-4196-a3fe-466654653162 0xc0046521d7 0xc0046521d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004652248  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 15 01:13:42.083: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 15 01:13:42.083: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-79 /apis/apps/v1/namespaces/deployment-79/replicasets/test-rolling-update-controller 560aa1f2-2207-4ce0-bf69-4d3494676dc4 8495390 2 2020-02-15 01:13:26 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f807dda2-ff96-4196-a3fe-466654653162 0xc004033fcf 0xc004033fe0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004652058  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 15 01:13:42.089: INFO: Pod "test-rolling-update-deployment-67cf4f6444-7kk5n" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-7kk5n test-rolling-update-deployment-67cf4f6444- deployment-79 /api/v1/namespaces/deployment-79/pods/test-rolling-update-deployment-67cf4f6444-7kk5n 5198027b-a72c-4ec8-ab7f-15090675e160 8495380 0 2020-02-15 01:13:34 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 8c5d8b49-eda6-49ff-9dc9-1d7047b47e95 0xc0046527d7 0xc0046527d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kz445,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kz445,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kz445,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:13:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-15 01:13:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-15 01:13:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-15 01:13:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://f97801bba11735f4e7490dbb4662be0b976426d92af55f73e9a38496fead5201,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:13:42.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-79" for this suite.

• [SLOW TEST:15.266 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":214,"skipped":3587,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:13:42.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 15 01:13:42.427: INFO: Number of nodes with available pods: 0
Feb 15 01:13:42.427: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:44.669: INFO: Number of nodes with available pods: 0
Feb 15 01:13:44.670: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:45.781: INFO: Number of nodes with available pods: 0
Feb 15 01:13:45.781: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:46.483: INFO: Number of nodes with available pods: 0
Feb 15 01:13:46.484: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:47.503: INFO: Number of nodes with available pods: 0
Feb 15 01:13:47.503: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:53.039: INFO: Number of nodes with available pods: 0
Feb 15 01:13:53.039: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:54.549: INFO: Number of nodes with available pods: 0
Feb 15 01:13:54.549: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:55.451: INFO: Number of nodes with available pods: 0
Feb 15 01:13:55.451: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:56.525: INFO: Number of nodes with available pods: 1
Feb 15 01:13:56.526: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:57.449: INFO: Number of nodes with available pods: 2
Feb 15 01:13:57.449: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 15 01:13:57.502: INFO: Number of nodes with available pods: 1
Feb 15 01:13:57.502: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:58.518: INFO: Number of nodes with available pods: 1
Feb 15 01:13:58.518: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:13:59.533: INFO: Number of nodes with available pods: 1
Feb 15 01:13:59.533: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:00.519: INFO: Number of nodes with available pods: 1
Feb 15 01:14:00.519: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:01.521: INFO: Number of nodes with available pods: 1
Feb 15 01:14:01.521: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:02.522: INFO: Number of nodes with available pods: 1
Feb 15 01:14:02.522: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:03.514: INFO: Number of nodes with available pods: 1
Feb 15 01:14:03.515: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:04.519: INFO: Number of nodes with available pods: 1
Feb 15 01:14:04.519: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:05.520: INFO: Number of nodes with available pods: 1
Feb 15 01:14:05.520: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:06.550: INFO: Number of nodes with available pods: 1
Feb 15 01:14:06.551: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:08.249: INFO: Number of nodes with available pods: 1
Feb 15 01:14:08.249: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:09.630: INFO: Number of nodes with available pods: 1
Feb 15 01:14:09.630: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:10.522: INFO: Number of nodes with available pods: 1
Feb 15 01:14:10.523: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:11.517: INFO: Number of nodes with available pods: 1
Feb 15 01:14:11.517: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:12.548: INFO: Number of nodes with available pods: 1
Feb 15 01:14:12.548: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:13.524: INFO: Number of nodes with available pods: 1
Feb 15 01:14:13.524: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:14.520: INFO: Number of nodes with available pods: 1
Feb 15 01:14:14.520: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:15.515: INFO: Number of nodes with available pods: 1
Feb 15 01:14:15.515: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:16.520: INFO: Number of nodes with available pods: 1
Feb 15 01:14:16.521: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:17.516: INFO: Number of nodes with available pods: 1
Feb 15 01:14:17.516: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:14:18.521: INFO: Number of nodes with available pods: 2
Feb 15 01:14:18.521: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8889, will wait for the garbage collector to delete the pods
Feb 15 01:14:18.594: INFO: Deleting DaemonSet.extensions daemon-set took: 12.117203ms
Feb 15 01:14:18.995: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.859858ms
Feb 15 01:14:33.201: INFO: Number of nodes with available pods: 0
Feb 15 01:14:33.202: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 01:14:33.206: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8889/daemonsets","resourceVersion":"8495582"},"items":null}

Feb 15 01:14:33.210: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8889/pods","resourceVersion":"8495582"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:14:33.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8889" for this suite.

• [SLOW TEST:51.152 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":215,"skipped":3597,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:14:33.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:14:33.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99" in namespace "downward-api-190" to be "success or failure"
Feb 15 01:14:33.338: INFO: Pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99": Phase="Pending", Reason="", readiness=false. Elapsed: 9.479921ms
Feb 15 01:14:35.347: INFO: Pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018360604s
Feb 15 01:14:37.359: INFO: Pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030778161s
Feb 15 01:14:39.374: INFO: Pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045863234s
Feb 15 01:14:41.390: INFO: Pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061064484s
STEP: Saw pod success
Feb 15 01:14:41.390: INFO: Pod "downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99" satisfied condition "success or failure"
Feb 15 01:14:41.394: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99 container client-container: 
STEP: delete the pod
Feb 15 01:14:41.458: INFO: Waiting for pod downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99 to disappear
Feb 15 01:14:41.465: INFO: Pod downwardapi-volume-d7f3666e-4e8c-4c60-8484-b20aad533c99 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:14:41.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-190" for this suite.

• [SLOW TEST:8.230 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3602,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:14:41.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-72ea0225-6568-4bf5-8496-8610e778f12b
STEP: Creating a pod to test consume secrets
Feb 15 01:14:41.769: INFO: Waiting up to 5m0s for pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d" in namespace "secrets-8065" to be "success or failure"
Feb 15 01:14:41.783: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.76593ms
Feb 15 01:14:43.793: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023582634s
Feb 15 01:14:45.801: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031393357s
Feb 15 01:14:47.810: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040392045s
Feb 15 01:14:49.819: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049881s
Feb 15 01:14:51.831: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061239158s
STEP: Saw pod success
Feb 15 01:14:51.831: INFO: Pod "pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d" satisfied condition "success or failure"
Feb 15 01:14:51.837: INFO: Trying to get logs from node jerma-node pod pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d container secret-volume-test: 
STEP: delete the pod
Feb 15 01:14:52.160: INFO: Waiting for pod pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d to disappear
Feb 15 01:14:52.173: INFO: Pod pod-secrets-ef4416d3-f8e1-48cc-8fc5-98bbef6c9f0d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:14:52.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8065" for this suite.

• [SLOW TEST:10.704 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":217,"skipped":3641,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:14:52.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0215 01:15:05.206746      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 01:15:05.206: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:15:05.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7916" for this suite.

• [SLOW TEST:14.605 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":218,"skipped":3648,"failed":0}
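Editor's note: the garbage-collector test above deletes `simpletest-rc-to-be-deleted` while half of its pods are also owned by `simpletest-rc-to-stay`, and asserts those dually-owned pods survive. A minimal sketch of that rule follows; the `Owner` type and `shouldCollect` helper are hypothetical simplifications (the real collector walks a graph of `ownerReferences`).

```go
package main

import "fmt"

// Owner is a hypothetical stand-in for the target of an ownerReference.
type Owner struct {
	Name         string
	Exists       bool // owner object still present in the cluster
	BeingDeleted bool // owner has a deletionTimestamp (e.g. foreground deletion)
}

// shouldCollect reports whether a dependent may be garbage-collected:
// only when none of its owners is still alive and not being deleted.
func shouldCollect(owners []Owner) bool {
	for _, o := range owners {
		if o.Exists && !o.BeingDeleted {
			return false // one valid owner is enough to keep the dependent
		}
	}
	return true
}

func main() {
	owners := []Owner{
		{Name: "simpletest-rc-to-be-deleted", Exists: true, BeingDeleted: true},
		{Name: "simpletest-rc-to-stay", Exists: true},
	}
	fmt.Println("collect dependent:", shouldCollect(owners)) // collect dependent: false
}
```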
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:15:06.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 01:15:13.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 01:15:15.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:19.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:20.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:21.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:24.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:26.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:27.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:15:29.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717326113, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 01:15:33.159: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:15:33.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-431" for this suite.
STEP: Destroying namespace "webhook-431-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:26.582 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":219,"skipped":3673,"failed":0}
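Editor's note: the webhook test above registers a mutating webhook and then creates a ConfigMap that the webhook must modify at admission time. The sketch below is a hypothetical, simplified version of such a mutation (the real sample webhook returns a JSONPatch inside an `AdmissionReview` response over HTTPS; `mutate` and the marker key here are illustrative only).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mutate is a hypothetical stand-in for the webhook's admission handler:
// it stamps a marker key into the ConfigMap's data to prove the object
// passed through the webhook.
func mutate(raw []byte) ([]byte, error) {
	var cm struct {
		Data map[string]string `json:"data"`
	}
	if err := json.Unmarshal(raw, &cm); err != nil {
		return nil, err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["mutation-stage"] = "yes" // marker added by the webhook
	return json.Marshal(cm)
}

func main() {
	in := []byte(`{"data":{"mutation-start":"yes"}}`)
	out, err := mutate(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```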
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:15:33.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 15 01:15:33.483: INFO: Waiting up to 5m0s for pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405" in namespace "emptydir-8448" to be "success or failure"
Feb 15 01:15:33.511: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405": Phase="Pending", Reason="", readiness=false. Elapsed: 27.130673ms
Feb 15 01:15:35.521: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037246129s
Feb 15 01:15:37.528: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04437898s
Feb 15 01:15:39.535: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051680475s
Feb 15 01:15:41.544: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060350633s
Feb 15 01:15:43.554: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070148855s
STEP: Saw pod success
Feb 15 01:15:43.554: INFO: Pod "pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405" satisfied condition "success or failure"
Feb 15 01:15:43.571: INFO: Trying to get logs from node jerma-node pod pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405 container test-container: 
STEP: delete the pod
Feb 15 01:15:43.632: INFO: Waiting for pod pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405 to disappear
Feb 15 01:15:43.679: INFO: Pod pod-ba7547c4-dc42-4eb6-ab97-77ddfce7c405 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:15:43.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8448" for this suite.

• [SLOW TEST:10.315 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":220,"skipped":3675,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:15:43.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-3c83bd9e-3aa1-4903-98a2-f875112d7040
STEP: Creating secret with name s-test-opt-upd-c82d537d-5a36-4adb-8c11-15ecef26839d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3c83bd9e-3aa1-4903-98a2-f875112d7040
STEP: Updating secret s-test-opt-upd-c82d537d-5a36-4adb-8c11-15ecef26839d
STEP: Creating secret with name s-test-opt-create-9a8a018e-5a63-42c4-9439-9763890e2445
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:17:05.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5748" for this suite.

• [SLOW TEST:81.769 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":221,"skipped":3696,"failed":0}
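Editor's note: the test above deletes one optional secret, updates a second, and creates a third, then waits for the pod's volume to reflect all three changes. The sketch below models the key semantic being exercised: on each sync, an `optional: true` secret source that is absent contributes an empty projection instead of failing the mount. The `resolve` helper and in-memory store are hypothetical simplifications of the kubelet's behavior.

```go
package main

import "fmt"

// resolve projects one secret volume source: a missing secret is an error
// unless the source is optional, in which case it contributes no files.
func resolve(store map[string]map[string]string, name string, optional bool) (map[string]string, error) {
	data, ok := store[name]
	if !ok {
		if optional {
			return map[string]string{}, nil // optional + absent => empty projection
		}
		return nil, fmt.Errorf("secret %q not found", name)
	}
	return data, nil
}

func main() {
	store := map[string]map[string]string{
		"s-test-opt-upd": {"data-1": "value-1"},
	}

	// Deleted optional secret: the pod keeps running, the projection empties.
	files, err := resolve(store, "s-test-opt-del", true)
	fmt.Println(len(files), err) // 0 <nil>

	// Updated secret: the next sync picks up the new key.
	store["s-test-opt-upd"]["data-3"] = "value-3"
	files, _ = resolve(store, "s-test-opt-upd", true)
	fmt.Println(files["data-3"]) // value-3
}
```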
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:17:05.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Feb 15 01:17:15.752: INFO: Pod pod-hostip-6b4613a8-7b8b-4995-ba58-6d73a2a0bbab has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:17:15.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9262" for this suite.

• [SLOW TEST:10.302 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3702,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:17:15.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-b84d5032-960a-4a84-ba23-15229427c661
STEP: Creating a pod to test consume configMaps
Feb 15 01:17:15.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f" in namespace "configmap-3643" to be "success or failure"
Feb 15 01:17:16.127: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 132.531574ms
Feb 15 01:17:18.924: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.929545426s
Feb 15 01:17:21.411: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.416798751s
Feb 15 01:17:23.467: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.472630599s
Feb 15 01:17:25.480: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.485448571s
Feb 15 01:17:27.488: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.493479347s
STEP: Saw pod success
Feb 15 01:17:27.488: INFO: Pod "pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f" satisfied condition "success or failure"
Feb 15 01:17:27.492: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f container configmap-volume-test: 
STEP: delete the pod
Feb 15 01:17:27.545: INFO: Waiting for pod pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f to disappear
Feb 15 01:17:27.552: INFO: Pod pod-configmaps-fa34201e-70a1-4fbf-b277-8129a2389f9f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:17:27.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3643" for this suite.

• [SLOW TEST:11.801 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":223,"skipped":3707,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:17:27.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 15 01:17:27.667: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:17:41.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2337" for this suite.

• [SLOW TEST:15.370 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":224,"skipped":3721,"failed":0}
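
The RestartAlways init-container behavior asserted above (init containers run to completion, in order, before the app container starts) can be reproduced with a minimal manifest along these lines. All names and images here are illustrative, not taken from the test's own pod spec:

```yaml
# Hypothetical pod: both init containers must exit 0, in order,
# before the app container starts; restartPolicy Always keeps the pod running.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox:1.29        # assumed image; the e2e suite uses its own
    command: ['sh', '-c', 'true']
  - name: init-2
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  containers:
  - name: app
    image: busybox:1.29
    command: ['sh', '-c', 'sleep 3600']
```

`kubectl get pod init-demo -w` would show the `Init:0/2` → `Init:1/2` → `Running` progression this test asserts on.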
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:17:42.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:18:21.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4914" for this suite.
STEP: Destroying namespace "nsdeletetest-6635" for this suite.
Feb 15 01:18:21.836: INFO: Namespace nsdeletetest-6635 was already deleted
STEP: Destroying namespace "nsdeletetest-4008" for this suite.

• [SLOW TEST:38.906 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":225,"skipped":3751,"failed":0}
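
The property verified above — deleting a namespace removes every pod inside it — follows from cascading namespace deletion in the API server. A minimal reproduction, with illustrative names:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nsdelete-demo          # illustrative
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdelete-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image
```

`kubectl delete namespace nsdelete-demo` drives the namespace through the Terminating phase; once its finalizers complete, recreating a namespace with the same name yields no pods, which is exactly what the test's "Recreating the namespace" / "Verifying there are no pods" steps check.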
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:18:21.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 15 01:18:21.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5208'
Feb 15 01:18:22.159: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 01:18:22.160: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Feb 15 01:18:24.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5208'
Feb 15 01:18:24.339: INFO: stderr: ""
Feb 15 01:18:24.339: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:18:24.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5208" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":226,"skipped":3765,"failed":0}
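------------------------------
The stderr captured above warns that `kubectl run --generator=deployment/apps.v1` is deprecated (it was later removed; `kubectl create deployment` is the replacement CLI path). An approximate YAML equivalent of what the deprecated invocation created — the `run:` label convention matches old `kubectl run` behavior, whereas `kubectl create deployment` labels with `app:` instead:

```yaml
# Approximate equivalent of the deprecated
# `kubectl run --generator=deployment/apps.v1` invocation in the log.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine
```

The non-deprecated one-liner would be `kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine`.
------------------------------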
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:18:24.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5142.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5142.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5142.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 01:18:38.624: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.632: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.640: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.645: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.660: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.665: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.669: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.674: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:38.682: INFO: Lookups using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local]

Feb 15 01:18:43.693: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.701: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.706: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.716: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.746: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.750: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.754: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.759: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:43.768: INFO: Lookups using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local]

Feb 15 01:18:48.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.700: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.705: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.710: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.722: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.726: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.729: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.733: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:48.748: INFO: Lookups using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local]

Feb 15 01:18:53.695: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.701: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.707: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.713: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.727: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.732: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.737: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.741: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:53.750: INFO: Lookups using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local]

Feb 15 01:18:58.699: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.714: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.723: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.733: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.752: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.756: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.760: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.766: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:18:58.775: INFO: Lookups using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local]

Feb 15 01:19:03.713: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.726: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.730: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.733: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.743: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.747: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.752: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.764: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local from pod dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956: the server could not find the requested resource (get pods dns-test-94eff2d7-719b-4565-9345-42911b25f956)
Feb 15 01:19:03.777: INFO: Lookups using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5142.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local jessie_udp@dns-test-service-2.dns-5142.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5142.svc.cluster.local]

Feb 15 01:19:08.763: INFO: DNS probes using dns-5142/dns-test-94eff2d7-719b-4565-9345-42911b25f956 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:19:08.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5142" for this suite.

• [SLOW TEST:44.583 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":227,"skipped":3771,"failed":0}
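
The `dns-querier-2.dns-test-service-2.dns-5142.svc.cluster.local` names probed above come from the combination of a headless service and pods that set `hostname` and `subdomain`. A sketch of that shape (selector, image, and command are illustrative; the suite uses its own wheezy/jessie dnsutils probe images):

```yaml
# Headless service: clusterIP None makes cluster DNS return pod A records.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None
  selector:
    name: dns-querier          # illustrative selector
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier
spec:
  hostname: dns-querier-2      # yields <hostname>.<subdomain>.<ns>.svc.cluster.local
  subdomain: dns-test-service-2
  containers:
  - name: querier
    image: busybox:1.29        # assumed image
    command: ['sh', '-c', 'sleep 3600']
```

The initial "Unable to read" retries in the log reflect the probe pod polling until the records propagate; the `dig +search ... A` loops shown in the STEP lines keep retrying once per second until each lookup returns a non-empty answer.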
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:19:08.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:19:09.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede" in namespace "projected-9286" to be "success or failure"
Feb 15 01:19:09.139: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede": Phase="Pending", Reason="", readiness=false. Elapsed: 75.459227ms
Feb 15 01:19:11.143: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079958573s
Feb 15 01:19:13.151: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087502565s
Feb 15 01:19:15.157: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093683189s
Feb 15 01:19:17.179: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115365429s
Feb 15 01:19:19.186: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122061531s
STEP: Saw pod success
Feb 15 01:19:19.186: INFO: Pod "downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede" satisfied condition "success or failure"
Feb 15 01:19:19.189: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede container client-container: 
STEP: delete the pod
Feb 15 01:19:19.252: INFO: Waiting for pod downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede to disappear
Feb 15 01:19:19.266: INFO: Pod downwardapi-volume-3a41be41-21a5-4264-9e8f-d67984478ede no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:19:19.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9286" for this suite.

• [SLOW TEST:10.346 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":228,"skipped":3773,"failed":0}
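
The pod under test mounts a projected downwardAPI volume exposing the container's CPU limit via `resourceFieldRef`; with no limit set in the spec, the value defaults to node allocatable, which is the behavior being asserted. A sketch (name, image, path, and divisor are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo       # illustrative
spec:
  containers:
  - name: client-container
    image: busybox:1.29        # assumed image
    command: ['sh', '-c', 'cat /etc/podinfo/cpu_limit']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report the limit in millicores
```

Note the container sets no `resources.limits.cpu`; the file therefore reports the node's allocatable CPU rather than a container limit.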
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:19:19.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 15 01:19:28.080: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7879cd50-8ad5-4c21-afee-e1dfdc6e5e3a"
Feb 15 01:19:28.080: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7879cd50-8ad5-4c21-afee-e1dfdc6e5e3a" in namespace "pods-7696" to be "terminated due to deadline exceeded"
Feb 15 01:19:28.130: INFO: Pod "pod-update-activedeadlineseconds-7879cd50-8ad5-4c21-afee-e1dfdc6e5e3a": Phase="Running", Reason="", readiness=true. Elapsed: 50.105614ms
Feb 15 01:19:30.136: INFO: Pod "pod-update-activedeadlineseconds-7879cd50-8ad5-4c21-afee-e1dfdc6e5e3a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.055839691s
Feb 15 01:19:30.136: INFO: Pod "pod-update-activedeadlineseconds-7879cd50-8ad5-4c21-afee-e1dfdc6e5e3a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:19:30.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7696" for this suite.

• [SLOW TEST:10.877 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":229,"skipped":3817,"failed":0}
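(Annotation, not part of the test output.) The test above patches `activeDeadlineSeconds` on a running pod and then waits for the kubelet to fail it with reason `DeadlineExceeded`. A minimal sketch of a pod exercising the same field, with placeholder names:

```yaml
# Illustrative sketch only; names and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds
spec:
  activeDeadlineSeconds: 30   # the e2e test updates this on a live pod
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

Once the deadline elapses, the pod transitions to `Phase=Failed` with `Reason=DeadlineExceeded`, which is exactly the condition the log waits on.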
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:19:30.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-0ae75776-03f1-4667-b61a-d5f63f3ff7b0
STEP: Creating a pod to test consume configMaps
Feb 15 01:19:30.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb" in namespace "projected-2704" to be "success or failure"
Feb 15 01:19:30.334: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.349644ms
Feb 15 01:19:32.341: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016247524s
Feb 15 01:19:34.352: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0274877s
Feb 15 01:19:36.360: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035178993s
Feb 15 01:19:38.551: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226445809s
Feb 15 01:19:40.562: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.237016165s
Feb 15 01:19:42.573: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.248038167s
STEP: Saw pod success
Feb 15 01:19:42.573: INFO: Pod "pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb" satisfied condition "success or failure"
Feb 15 01:19:42.577: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 01:19:43.124: INFO: Waiting for pod pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb to disappear
Feb 15 01:19:43.137: INFO: Pod pod-projected-configmaps-53089ca7-0e4b-4ec4-9a53-ec87a1841ebb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:19:43.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2704" for this suite.

• [SLOW TEST:12.996 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":230,"skipped":3818,"failed":0}
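(Annotation, not part of the test output.) "Consumable in multiple volumes in the same pod" means one ConfigMap mounted through two projected volumes. A hedged sketch, with hypothetical mount paths and key names:

```yaml
# Illustrative sketch only; ConfigMap name, paths, and keys are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg-a/data-1 /etc/cfg-b/data-1"]
    volumeMounts:
    - name: cfg-a
      mountPath: /etc/cfg-a
    - name: cfg-b
      mountPath: /etc/cfg-b
  volumes:
  - name: cfg-a
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # same ConfigMap in both volumes
  - name: cfg-b
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```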
SS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:19:43.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2550.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2550.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2550.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2550.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2550.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2550.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 01:19:55.544: INFO: DNS probes using dns-2550/dns-test-91b424b5-2378-4f26-b7c4-d9fff533ee8e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:19:55.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2550" for this suite.

• [SLOW TEST:12.584 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":231,"skipped":3820,"failed":0}
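(Annotation, not part of the test output.) The `getent hosts dns-querier-2.dns-test-service-2...` probes above rely on a headless service plus a pod with matching `hostname`/`subdomain`. A rough sketch of that wiring:

```yaml
# Illustrative sketch only; ports and labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None            # headless service
  selector:
    name: dns-querier-2
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # resolves as dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
```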
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:19:55.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 15 01:19:55.983: INFO: Waiting up to 5m0s for pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c" in namespace "downward-api-2600" to be "success or failure"
Feb 15 01:19:56.025: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.309529ms
Feb 15 01:19:58.030: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046532427s
Feb 15 01:20:00.039: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054870212s
Feb 15 01:20:02.050: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066029367s
Feb 15 01:20:04.108: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124221342s
Feb 15 01:20:06.114: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130564149s
Feb 15 01:20:08.120: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.136344309s
STEP: Saw pod success
Feb 15 01:20:08.120: INFO: Pod "downward-api-fe0cfffe-631f-439e-889d-70241889af6c" satisfied condition "success or failure"
Feb 15 01:20:08.124: INFO: Trying to get logs from node jerma-node pod downward-api-fe0cfffe-631f-439e-889d-70241889af6c container dapi-container: 
STEP: delete the pod
Feb 15 01:20:08.166: INFO: Waiting for pod downward-api-fe0cfffe-631f-439e-889d-70241889af6c to disappear
Feb 15 01:20:08.186: INFO: Pod downward-api-fe0cfffe-631f-439e-889d-70241889af6c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:20:08.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2600" for this suite.

• [SLOW TEST:12.455 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3863,"failed":0}
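(Annotation, not part of the test output.) The "host IP as an env var" test uses a downward API `fieldRef` on `status.hostIP`. A minimal sketch with a placeholder env var name:

```yaml
# Illustrative sketch only; the env var name is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # injected by the downward API
```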
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:20:08.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:20:18.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1564" for this suite.

• [SLOW TEST:10.295 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":233,"skipped":3876,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:20:18.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:20:35.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2059" for this suite.

• [SLOW TEST:17.199 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":234,"skipped":3894,"failed":0}
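(Annotation, not part of the test output.) "Capture the life of a secret" means the quota's `used` count tracks Secret creation and deletion. A minimal quota limiting Secrets, with placeholder values:

```yaml
# Illustrative sketch only; the hard limit is a placeholder.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    secrets: "5"   # status.used.secrets rises on creation, falls on deletion
```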
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:20:35.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 15 01:20:35.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:20:50.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3670" for this suite.

• [SLOW TEST:15.249 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":235,"skipped":3916,"failed":0}
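(Annotation, not part of the test output.) "Mark a version not served" refers to flipping `served: false` on one entry of a multi-version CRD, after which that version disappears from the published OpenAPI spec. A hedged fragment of the relevant `versions` stanza (schemas elided):

```yaml
# Illustrative CRD fragment only; group/kind and schemas are omitted.
spec:
  versions:
  - name: v1
    served: true
    storage: true
  - name: v2
    served: false    # unserved: removed from the published OpenAPI definitions
    storage: false
```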
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:20:50.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:20:51.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9849" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":236,"skipped":3928,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:20:51.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 15 01:20:51.234: INFO: Waiting up to 5m0s for pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60" in namespace "downward-api-1579" to be "success or failure"
Feb 15 01:20:51.257: INFO: Pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60": Phase="Pending", Reason="", readiness=false. Elapsed: 22.61307ms
Feb 15 01:20:53.263: INFO: Pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028882093s
Feb 15 01:20:55.270: INFO: Pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035459343s
Feb 15 01:20:57.288: INFO: Pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053841983s
Feb 15 01:20:59.295: INFO: Pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060703982s
STEP: Saw pod success
Feb 15 01:20:59.295: INFO: Pod "downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60" satisfied condition "success or failure"
Feb 15 01:20:59.299: INFO: Trying to get logs from node jerma-node pod downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60 container dapi-container: 
STEP: delete the pod
Feb 15 01:20:59.368: INFO: Waiting for pod downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60 to disappear
Feb 15 01:20:59.533: INFO: Pod downward-api-e86b5176-3cd6-4d84-9f0e-79f780de5d60 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:20:59.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1579" for this suite.

• [SLOW TEST:8.398 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3947,"failed":0}
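(Annotation, not part of the test output.) Here the container sets no `resources.limits`, so downward API `resourceFieldRef` values for `limits.cpu`/`limits.memory` fall back to the node's allocatable capacity. A sketch with placeholder names:

```yaml
# Illustrative sketch only; env var names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-limits-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # no resources.limits set: values default to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```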
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:20:59.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-e6afeae0-e243-4ebc-bfdf-de91d8e95f0e
STEP: Creating a pod to test consume secrets
Feb 15 01:20:59.738: INFO: Waiting up to 5m0s for pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96" in namespace "secrets-7724" to be "success or failure"
Feb 15 01:20:59.754: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Pending", Reason="", readiness=false. Elapsed: 15.543513ms
Feb 15 01:21:01.760: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021603229s
Feb 15 01:21:03.769: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030172008s
Feb 15 01:21:05.776: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037710288s
Feb 15 01:21:07.784: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045772963s
Feb 15 01:21:09.793: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054203756s
Feb 15 01:21:11.803: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.064487324s
STEP: Saw pod success
Feb 15 01:21:11.803: INFO: Pod "pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96" satisfied condition "success or failure"
Feb 15 01:21:11.808: INFO: Trying to get logs from node jerma-node pod pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96 container secret-env-test: 
STEP: delete the pod
Feb 15 01:21:11.875: INFO: Waiting for pod pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96 to disappear
Feb 15 01:21:11.978: INFO: Pod pod-secrets-98322075-c72f-44f9-84ee-3a0681ac8c96 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:21:11.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7724" for this suite.

• [SLOW TEST:12.444 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":238,"skipped":3961,"failed":0}
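(Annotation, not part of the test output.) Consuming a Secret "in env vars" uses `secretKeyRef`. A minimal sketch; the Secret name and key are hypothetical:

```yaml
# Illustrative sketch only; Secret name and key are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test   # the Secret created by the test
          key: data-1         # hypothetical key
```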
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:21:12.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:21:12.304: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 15 01:21:12.321: INFO: Number of nodes with available pods: 0
Feb 15 01:21:12.321: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 15 01:21:12.487: INFO: Number of nodes with available pods: 0
Feb 15 01:21:12.487: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:13.498: INFO: Number of nodes with available pods: 0
Feb 15 01:21:13.498: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:14.496: INFO: Number of nodes with available pods: 0
Feb 15 01:21:14.497: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:15.493: INFO: Number of nodes with available pods: 0
Feb 15 01:21:15.493: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:16.500: INFO: Number of nodes with available pods: 0
Feb 15 01:21:16.500: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:17.507: INFO: Number of nodes with available pods: 0
Feb 15 01:21:17.507: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:18.511: INFO: Number of nodes with available pods: 0
Feb 15 01:21:18.511: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:19.505: INFO: Number of nodes with available pods: 1
Feb 15 01:21:19.505: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 15 01:21:19.598: INFO: Number of nodes with available pods: 1
Feb 15 01:21:19.598: INFO: Number of running nodes: 0, number of available pods: 1
Feb 15 01:21:20.607: INFO: Number of nodes with available pods: 0
Feb 15 01:21:20.607: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 15 01:21:20.636: INFO: Number of nodes with available pods: 0
Feb 15 01:21:20.636: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:21.645: INFO: Number of nodes with available pods: 0
Feb 15 01:21:21.645: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:22.653: INFO: Number of nodes with available pods: 0
Feb 15 01:21:22.654: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:23.647: INFO: Number of nodes with available pods: 0
Feb 15 01:21:23.647: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:24.640: INFO: Number of nodes with available pods: 0
Feb 15 01:21:24.640: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:25.645: INFO: Number of nodes with available pods: 0
Feb 15 01:21:25.645: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:26.644: INFO: Number of nodes with available pods: 0
Feb 15 01:21:26.645: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:27.643: INFO: Number of nodes with available pods: 0
Feb 15 01:21:27.643: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:28.653: INFO: Number of nodes with available pods: 0
Feb 15 01:21:28.653: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:29.643: INFO: Number of nodes with available pods: 0
Feb 15 01:21:29.643: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:30.648: INFO: Number of nodes with available pods: 0
Feb 15 01:21:30.649: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:31.647: INFO: Number of nodes with available pods: 0
Feb 15 01:21:31.647: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:32.650: INFO: Number of nodes with available pods: 0
Feb 15 01:21:32.650: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:33.781: INFO: Number of nodes with available pods: 0
Feb 15 01:21:33.781: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:34.642: INFO: Number of nodes with available pods: 0
Feb 15 01:21:34.642: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:35.641: INFO: Number of nodes with available pods: 0
Feb 15 01:21:35.642: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:36.648: INFO: Number of nodes with available pods: 0
Feb 15 01:21:36.648: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:37.997: INFO: Number of nodes with available pods: 0
Feb 15 01:21:37.997: INFO: Node jerma-node is running more than one daemon pod
Feb 15 01:21:38.653: INFO: Number of nodes with available pods: 1
Feb 15 01:21:38.654: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7106, will wait for the garbage collector to delete the pods
Feb 15 01:21:38.737: INFO: Deleting DaemonSet.extensions daemon-set took: 13.852753ms
Feb 15 01:21:39.138: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.80469ms
Feb 15 01:21:52.443: INFO: Number of nodes with available pods: 0
Feb 15 01:21:52.443: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 01:21:52.447: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7106/daemonsets","resourceVersion":"8497437"},"items":null}

Feb 15 01:21:52.449: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7106/pods","resourceVersion":"8497437"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:21:52.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7106" for this suite.

• [SLOW TEST:40.519 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":239,"skipped":3970,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:21:52.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:21:52.662: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401" in namespace "security-context-test-5070" to be "success or failure"
Feb 15 01:21:52.675: INFO: Pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401": Phase="Pending", Reason="", readiness=false. Elapsed: 12.729636ms
Feb 15 01:21:54.696: INFO: Pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033286528s
Feb 15 01:21:56.705: INFO: Pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041826788s
Feb 15 01:21:58.718: INFO: Pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055384441s
Feb 15 01:22:00.727: INFO: Pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064520601s
Feb 15 01:22:00.727: INFO: Pod "busybox-readonly-false-24ca3b8d-7203-414b-8251-a3aaeb822401" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:22:00.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5070" for this suite.

• [SLOW TEST:8.208 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":240,"skipped":3975,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:22:00.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 15 01:22:17.068: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 01:22:17.093: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 01:22:19.094: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 01:22:19.100: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 01:22:21.094: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 01:22:21.117: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 01:22:23.094: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 01:22:23.099: INFO: Pod pod-with-poststart-http-hook still exists
Feb 15 01:22:25.094: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 15 01:22:25.117: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:22:25.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4199" for this suite.

• [SLOW TEST:24.387 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":241,"skipped":3986,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:22:25.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 15 01:22:25.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-578'
Feb 15 01:22:27.224: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 01:22:27.224: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 15 01:22:27.249: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-knnzh]
Feb 15 01:22:27.249: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-knnzh" in namespace "kubectl-578" to be "running and ready"
Feb 15 01:22:27.253: INFO: Pod "e2e-test-httpd-rc-knnzh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.540104ms
Feb 15 01:22:29.262: INFO: Pod "e2e-test-httpd-rc-knnzh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01303843s
Feb 15 01:22:31.268: INFO: Pod "e2e-test-httpd-rc-knnzh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0197283s
Feb 15 01:22:33.280: INFO: Pod "e2e-test-httpd-rc-knnzh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031647575s
Feb 15 01:22:35.287: INFO: Pod "e2e-test-httpd-rc-knnzh": Phase="Running", Reason="", readiness=true. Elapsed: 8.038212435s
Feb 15 01:22:35.287: INFO: Pod "e2e-test-httpd-rc-knnzh" satisfied condition "running and ready"
Feb 15 01:22:35.287: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-knnzh]
Feb 15 01:22:35.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-578'
Feb 15 01:22:35.492: INFO: stderr: ""
Feb 15 01:22:35.492: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\n[Sat Feb 15 01:22:34.434011 2020] [mpm_event:notice] [pid 1:tid 140593557359464] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Feb 15 01:22:34.434094 2020] [core:notice] [pid 1:tid 140593557359464] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Feb 15 01:22:35.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-578'
Feb 15 01:22:35.609: INFO: stderr: ""
Feb 15 01:22:35.609: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:22:35.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-578" for this suite.

• [SLOW TEST:10.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":280,"completed":242,"skipped":3995,"failed":0}
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:22:35.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5142
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5142
I0215 01:22:35.877007      10 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5142, replica count: 2
I0215 01:22:38.928090      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:22:41.928782      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:22:44.929358      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:22:47.929760      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 01:22:50.931155      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 15 01:22:50.931: INFO: Creating new exec pod
Feb 15 01:22:59.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5142 execpodcs7ld -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 15 01:23:00.453: INFO: stderr: "I0215 01:23:00.252532    4704 log.go:172] (0xc000ac4d10) (0xc0006ee960) Create stream\nI0215 01:23:00.252666    4704 log.go:172] (0xc000ac4d10) (0xc0006ee960) Stream added, broadcasting: 1\nI0215 01:23:00.257179    4704 log.go:172] (0xc000ac4d10) Reply frame received for 1\nI0215 01:23:00.257242    4704 log.go:172] (0xc000ac4d10) (0xc000740000) Create stream\nI0215 01:23:00.257255    4704 log.go:172] (0xc000ac4d10) (0xc000740000) Stream added, broadcasting: 3\nI0215 01:23:00.258111    4704 log.go:172] (0xc000ac4d10) Reply frame received for 3\nI0215 01:23:00.258132    4704 log.go:172] (0xc000ac4d10) (0xc00089c000) Create stream\nI0215 01:23:00.258140    4704 log.go:172] (0xc000ac4d10) (0xc00089c000) Stream added, broadcasting: 5\nI0215 01:23:00.259051    4704 log.go:172] (0xc000ac4d10) Reply frame received for 5\nI0215 01:23:00.359091    4704 log.go:172] (0xc000ac4d10) Data frame received for 5\nI0215 01:23:00.359144    4704 log.go:172] (0xc00089c000) (5) Data frame handling\nI0215 01:23:00.359191    4704 log.go:172] (0xc00089c000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0215 01:23:00.368877    4704 log.go:172] (0xc000ac4d10) Data frame received for 5\nI0215 01:23:00.369141    4704 log.go:172] (0xc00089c000) (5) Data frame handling\nI0215 01:23:00.369183    4704 log.go:172] (0xc00089c000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0215 01:23:00.441374    4704 log.go:172] (0xc000ac4d10) Data frame received for 1\nI0215 01:23:00.441571    4704 log.go:172] (0xc000ac4d10) (0xc000740000) Stream removed, broadcasting: 3\nI0215 01:23:00.441996    4704 log.go:172] (0xc0006ee960) (1) Data frame handling\nI0215 01:23:00.442444    4704 log.go:172] (0xc0006ee960) (1) Data frame sent\nI0215 01:23:00.442464    4704 log.go:172] (0xc000ac4d10) (0xc00089c000) Stream removed, broadcasting: 5\nI0215 01:23:00.442624    4704 log.go:172] (0xc000ac4d10) (0xc0006ee960) Stream removed, broadcasting: 1\nI0215 01:23:00.442738    4704 log.go:172] (0xc000ac4d10) Go away received\nI0215 01:23:00.443878    4704 log.go:172] (0xc000ac4d10) (0xc0006ee960) Stream removed, broadcasting: 1\nI0215 01:23:00.443955    4704 log.go:172] (0xc000ac4d10) (0xc000740000) Stream removed, broadcasting: 3\nI0215 01:23:00.443987    4704 log.go:172] (0xc000ac4d10) (0xc00089c000) Stream removed, broadcasting: 5\n"
Feb 15 01:23:00.454: INFO: stdout: ""
Feb 15 01:23:00.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5142 execpodcs7ld -- /bin/sh -x -c nc -zv -t -w 2 10.96.109.193 80'
Feb 15 01:23:00.766: INFO: stderr: "I0215 01:23:00.592783    4723 log.go:172] (0xc000b76840) (0xc000b54320) Create stream\nI0215 01:23:00.593329    4723 log.go:172] (0xc000b76840) (0xc000b54320) Stream added, broadcasting: 1\nI0215 01:23:00.596019    4723 log.go:172] (0xc000b76840) Reply frame received for 1\nI0215 01:23:00.596067    4723 log.go:172] (0xc000b76840) (0xc000a560a0) Create stream\nI0215 01:23:00.596080    4723 log.go:172] (0xc000b76840) (0xc000a560a0) Stream added, broadcasting: 3\nI0215 01:23:00.597665    4723 log.go:172] (0xc000b76840) Reply frame received for 3\nI0215 01:23:00.597711    4723 log.go:172] (0xc000b76840) (0xc0009c20a0) Create stream\nI0215 01:23:00.597719    4723 log.go:172] (0xc000b76840) (0xc0009c20a0) Stream added, broadcasting: 5\nI0215 01:23:00.599120    4723 log.go:172] (0xc000b76840) Reply frame received for 5\nI0215 01:23:00.649234    4723 log.go:172] (0xc000b76840) Data frame received for 5\nI0215 01:23:00.649337    4723 log.go:172] (0xc0009c20a0) (5) Data frame handling\nI0215 01:23:00.649370    4723 log.go:172] (0xc0009c20a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.109.193 80\nI0215 01:23:00.652484    4723 log.go:172] (0xc000b76840) Data frame received for 5\nI0215 01:23:00.652549    4723 log.go:172] (0xc0009c20a0) (5) Data frame handling\nI0215 01:23:00.652577    4723 log.go:172] (0xc0009c20a0) (5) Data frame sent\nConnection to 10.96.109.193 80 port [tcp/http] succeeded!\nI0215 01:23:00.756696    4723 log.go:172] (0xc000b76840) Data frame received for 1\nI0215 01:23:00.756757    4723 log.go:172] (0xc000b76840) (0xc0009c20a0) Stream removed, broadcasting: 5\nI0215 01:23:00.756824    4723 log.go:172] (0xc000b54320) (1) Data frame handling\nI0215 01:23:00.756848    4723 log.go:172] (0xc000b54320) (1) Data frame sent\nI0215 01:23:00.756864    4723 log.go:172] (0xc000b76840) (0xc000a560a0) Stream removed, broadcasting: 3\nI0215 01:23:00.756949    4723 log.go:172] (0xc000b76840) (0xc000b54320) Stream removed, broadcasting: 1\nI0215 01:23:00.756974    4723 log.go:172] (0xc000b76840) Go away received\nI0215 01:23:00.758090    4723 log.go:172] (0xc000b76840) (0xc000b54320) Stream removed, broadcasting: 1\nI0215 01:23:00.758137    4723 log.go:172] (0xc000b76840) (0xc000a560a0) Stream removed, broadcasting: 3\nI0215 01:23:00.758146    4723 log.go:172] (0xc000b76840) (0xc0009c20a0) Stream removed, broadcasting: 5\n"
Feb 15 01:23:00.766: INFO: stdout: ""
Feb 15 01:23:00.766: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:23:00.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5142" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:25.277 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":243,"skipped":3995,"failed":0}
S
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:23:00.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6809, will wait for the garbage collector to delete the pods
Feb 15 01:23:19.079: INFO: Deleting Job.batch foo took: 24.825515ms
Feb 15 01:23:19.480: INFO: Terminating Job.batch foo pods took: 401.129911ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:24:02.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6809" for this suite.

• [SLOW TEST:61.473 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":244,"skipped":3996,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:24:02.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 15 01:24:11.184: INFO: Successfully updated pod "labelsupdate7476044a-9f54-434d-89cf-7a8ed1cc6fc1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:24:13.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8465" for this suite.

• [SLOW TEST:10.844 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":245,"skipped":4016,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:24:13.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:24:13.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3332" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":246,"skipped":4053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:24:13.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 15 01:24:26.335: INFO: Successfully updated pod "annotationupdate62da0e0e-7644-4991-b42e-3bcf6db93b10"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:24:28.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7264" for this suite.

• [SLOW TEST:14.961 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":247,"skipped":4104,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:24:28.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-70fb75c3-e1ee-4c42-abe3-acf4d14e8d3a
STEP: Creating a pod to test consume secrets
Feb 15 01:24:28.539: INFO: Waiting up to 5m0s for pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd" in namespace "secrets-4984" to be "success or failure"
Feb 15 01:24:28.567: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.041586ms
Feb 15 01:24:30.576: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037009497s
Feb 15 01:24:32.588: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048959925s
Feb 15 01:24:34.596: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05714681s
Feb 15 01:24:36.605: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066154404s
Feb 15 01:24:38.620: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080528871s
Feb 15 01:24:40.635: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.095589738s
STEP: Saw pod success
Feb 15 01:24:40.635: INFO: Pod "pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd" satisfied condition "success or failure"
Feb 15 01:24:40.637: INFO: Trying to get logs from node jerma-node pod pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd container secret-volume-test: 
STEP: delete the pod
Feb 15 01:24:40.673: INFO: Waiting for pod pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd to disappear
Feb 15 01:24:40.676: INFO: Pod pod-secrets-f250001b-7db3-4588-b95d-e0408e0057cd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:24:40.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4984" for this suite.

• [SLOW TEST:12.272 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":248,"skipped":4104,"failed":0}
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:24:40.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:24:40.888: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0397a5b0-7c3c-41fd-bfa3-2315d64deff2", Controller:(*bool)(0xc004652dd2), BlockOwnerDeletion:(*bool)(0xc004652dd3)}}
Feb 15 01:24:40.900: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dd8739ba-5a62-4c3f-a155-679f840752ac", Controller:(*bool)(0xc00409bb1a), BlockOwnerDeletion:(*bool)(0xc00409bb1b)}}
Feb 15 01:24:40.928: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4d700e96-c1d1-4208-a44e-efa0b897d58f", Controller:(*bool)(0xc00409bcfa), BlockOwnerDeletion:(*bool)(0xc00409bcfb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:24:45.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9364" for this suite.

• [SLOW TEST:5.375 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":249,"skipped":4104,"failed":0}
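The three pods logged above form an intentional owner-reference cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), which the garbage collector must tolerate without deadlocking. A minimal sketch of that structure and a client-side cycle check, using plain dicts as stand-ins for the real `metav1.OwnerReference` type (names mirror the log; the helpers are illustrative):

```python
def owner_of(pods, name):
    """Return the owner name recorded in a pod's ownerReferences, or None."""
    refs = pods[name]["metadata"].get("ownerReferences", [])
    return refs[0]["name"] if refs else None

def has_cycle(pods, start):
    """Follow ownerReferences from `start`; True if we revisit a pod."""
    seen = set()
    cur = start
    while cur is not None:
        if cur in seen:
            return True
        seen.add(cur)
        cur = owner_of(pods, cur)
    return False

# The same dependency circle the test builds: pod1 -> pod3 -> pod2 -> pod1
pods = {
    "pod1": {"metadata": {"ownerReferences": [{"apiVersion": "v1", "kind": "Pod", "name": "pod3"}]}},
    "pod2": {"metadata": {"ownerReferences": [{"apiVersion": "v1", "kind": "Pod", "name": "pod1"}]}},
    "pod3": {"metadata": {"ownerReferences": [{"apiVersion": "v1", "kind": "Pod", "name": "pod2"}]}},
}
print(has_cycle(pods, "pod1"))  # True
```

The test passes because the garbage collector breaks such cycles rather than blocking deletion on them.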
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:24:46.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-gjhq
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 01:24:46.391: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gjhq" in namespace "subpath-5808" to be "success or failure"
Feb 15 01:24:46.412: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Pending", Reason="", readiness=false. Elapsed: 21.187618ms
Feb 15 01:24:48.420: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02893046s
Feb 15 01:24:50.762: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370647547s
Feb 15 01:24:52.772: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380602323s
Feb 15 01:24:54.781: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389459216s
Feb 15 01:24:56.791: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 10.39954952s
Feb 15 01:24:58.803: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 12.41179278s
Feb 15 01:25:00.818: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 14.426449812s
Feb 15 01:25:02.827: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 16.436303452s
Feb 15 01:25:04.835: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 18.444385976s
Feb 15 01:25:06.847: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 20.456220398s
Feb 15 01:25:08.861: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 22.469546814s
Feb 15 01:25:10.873: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 24.482087114s
Feb 15 01:25:12.886: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 26.495324192s
Feb 15 01:25:14.893: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Running", Reason="", readiness=true. Elapsed: 28.502374049s
Feb 15 01:25:16.900: INFO: Pod "pod-subpath-test-configmap-gjhq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.508641339s
STEP: Saw pod success
Feb 15 01:25:16.900: INFO: Pod "pod-subpath-test-configmap-gjhq" satisfied condition "success or failure"
Feb 15 01:25:16.904: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-gjhq container test-container-subpath-configmap-gjhq: 
STEP: delete the pod
Feb 15 01:25:17.023: INFO: Waiting for pod pod-subpath-test-configmap-gjhq to disappear
Feb 15 01:25:17.026: INFO: Pod pod-subpath-test-configmap-gjhq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gjhq
Feb 15 01:25:17.026: INFO: Deleting pod "pod-subpath-test-configmap-gjhq" in namespace "subpath-5808"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:25:17.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5808" for this suite.

• [SLOW TEST:30.974 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":250,"skipped":4110,"failed":0}
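The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above are one polling loop: re-check the pod phase every couple of seconds until it reaches a terminal phase or the timeout expires. A sketch of that loop, with `get_phase` standing in for a real API GET (assumption):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded/Failed or timeout elapses."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)

# Simulated pod that is Pending, then Running, then Succeeds (as in the log)
phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(result)  # Succeeded
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting; the e2e framework's `WaitForPodSuccessInNamespace` follows the same shape.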
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:25:17.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-31f2e9ee-5dee-4d53-9f88-230947c56b24 in namespace container-probe-816
Feb 15 01:25:25.299: INFO: Started pod busybox-31f2e9ee-5dee-4d53-9f88-230947c56b24 in namespace container-probe-816
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 01:25:25.304: INFO: Initial restart count of pod busybox-31f2e9ee-5dee-4d53-9f88-230947c56b24 is 0
Feb 15 01:26:13.575: INFO: Restart count of pod container-probe-816/busybox-31f2e9ee-5dee-4d53-9f88-230947c56b24 is now 1 (48.271150926s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:26:13.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-816" for this suite.

• [SLOW TEST:56.603 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":251,"skipped":4120,"failed":0}
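The restart observed above (`Restart count ... is now 1`) comes from an exec liveness probe (`cat /tmp/health`) that starts failing once the file is removed; after enough consecutive failures the kubelet restarts the container and `restartCount` increments. A toy model of that bookkeeping (threshold and sequence are illustrative, not the test's exact probe settings):

```python
class ProbeTracker:
    """Toy model of kubelet liveness-probe accounting."""
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.restart_count = 0

    def observe(self, probe_ok):
        if probe_ok:
            self.consecutive_failures = 0
            return
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.failure_threshold:
            self.restart_count += 1          # container restarted
            self.consecutive_failures = 0    # counter resets after restart

t = ProbeTracker()
for ok in [True, True, False, False, False]:  # /tmp/health removed mid-run
    t.observe(ok)
print(t.restart_count)  # 1
```

The ~48s the log reports is the probe's initial delay plus the failure window plus restart latency, which is why the test budgets minutes rather than seconds.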
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:26:13.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4
Feb 15 01:26:13.837: INFO: Pod name my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4: Found 0 pods out of 1
Feb 15 01:26:18.852: INFO: Pod name my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4: Found 1 pods out of 1
Feb 15 01:26:18.852: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4" are running
Feb 15 01:26:24.892: INFO: Pod "my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4-84g26" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 01:26:13 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 01:26:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 01:26:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 01:26:13 +0000 UTC Reason: Message:}])
Feb 15 01:26:24.892: INFO: Trying to dial the pod
Feb 15 01:26:29.921: INFO: Controller my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4: Got expected result from replica 1 [my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4-84g26]: "my-hostname-basic-aa279ecd-5647-406f-ae90-4c8b764630b4-84g26", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:26:29.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3927" for this suite.

• [SLOW TEST:16.292 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":252,"skipped":4132,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:26:29.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 15 01:26:46.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:46.187: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:26:48.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:48.277: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:26:50.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:50.194: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:26:52.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:52.197: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:26:54.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:54.192: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:26:56.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:56.196: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:26:58.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:26:58.194: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:27:00.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:27:00.204: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:27:02.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:27:02.221: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 01:27:04.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 01:27:04.202: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:27:04.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-781" for this suite.

• [SLOW TEST:34.313 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":253,"skipped":4143,"failed":0}
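The long run of `Waiting for pod pod-with-prestop-exec-hook to disappear` lines above is a 2-second poll loop: the pod lingers while its preStop exec hook runs out its grace period, then the API object vanishes. A sketch of that loop, with `pod_exists` standing in for a real API GET (assumption):

```python
def wait_for_disappearance(pod_exists, attempts=30, interval=2.0, sleep=None):
    """Poll pod_exists() until it returns False; return how many polls saw it."""
    sleep = sleep or (lambda s: None)
    for i in range(attempts):
        if not pod_exists():
            return i
        sleep(interval)
    raise TimeoutError("pod never disappeared")

# Simulate the log above: the pod survives 9 polls (hook still draining),
# then is gone on the 10th check.
checks = iter([True] * 9 + [False])
polls = wait_for_disappearance(lambda: next(checks))
print(polls)  # 9
```

Only after the pod is gone does the test's final `check prestop hook` step verify that the hook's command actually reached the handler pod.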
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:27:04.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:27:08.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 15 01:27:08.418: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-15T01:27:08Z generation:1 name:name1 resourceVersion:8498646 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cef4e57f-812f-42af-b8b1-182d1d21517b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 15 01:27:18.428: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-15T01:27:18Z generation:1 name:name2 resourceVersion:8498687 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:88bd7b32-7653-437a-b2d2-35d00d7bc5a5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 15 01:27:28.439: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-15T01:27:08Z generation:2 name:name1 resourceVersion:8498709 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cef4e57f-812f-42af-b8b1-182d1d21517b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 15 01:27:38.452: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-15T01:27:18Z generation:2 name:name2 resourceVersion:8498733 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:88bd7b32-7653-437a-b2d2-35d00d7bc5a5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 15 01:27:48.466: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-15T01:27:08Z generation:2 name:name1 resourceVersion:8498757 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cef4e57f-812f-42af-b8b1-182d1d21517b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 15 01:27:58.482: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-15T01:27:18Z generation:2 name:name2 resourceVersion:8498779 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:88bd7b32-7653-437a-b2d2-35d00d7bc5a5] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:28:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3624" for this suite.

• [SLOW TEST:64.771 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":254,"skipped":4154,"failed":0}
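The watch output above delivers, per custom resource, an ADDED, then a MODIFIED (note `generation` bumping 1 → 2 when the spec changes), then a DELETED event. A sketch of checking that per-object ordering over a stream of `(type, name, generation)` tuples mirroring the logged events:

```python
# Events as seen in the log above (generation values copied from it).
events = [
    ("ADDED",    "name1", 1),
    ("ADDED",    "name2", 1),
    ("MODIFIED", "name1", 2),
    ("MODIFIED", "name2", 2),
    ("DELETED",  "name1", 2),
    ("DELETED",  "name2", 2),
]

def per_object_order(events):
    """Group event types by object name, preserving arrival order."""
    order = {}
    for etype, name, _generation in events:
        order.setdefault(name, []).append(etype)
    return order

print(per_object_order(events))
```

The interleaving across objects is unconstrained; what the test cares about is that each object individually sees the ADDED/MODIFIED/DELETED lifecycle in order.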
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:28:09.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:28:09.105: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:28:10.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5285" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":255,"skipped":4165,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:28:10.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 15 01:28:10.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-510'
Feb 15 01:28:12.400: INFO: stderr: ""
Feb 15 01:28:12.400: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 15 01:28:12.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-510'
Feb 15 01:28:12.709: INFO: stderr: ""
Feb 15 01:28:12.709: INFO: stdout: "update-demo-nautilus-2fbv4 update-demo-nautilus-qj8r4 "
Feb 15 01:28:12.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fbv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-510'
Feb 15 01:28:12.888: INFO: stderr: ""
Feb 15 01:28:12.889: INFO: stdout: ""
Feb 15 01:28:12.889: INFO: update-demo-nautilus-2fbv4 is created but not running
Feb 15 01:28:17.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-510'
Feb 15 01:28:18.420: INFO: stderr: ""
Feb 15 01:28:18.420: INFO: stdout: "update-demo-nautilus-2fbv4 update-demo-nautilus-qj8r4 "
Feb 15 01:28:18.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fbv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-510'
Feb 15 01:28:19.215: INFO: stderr: ""
Feb 15 01:28:19.215: INFO: stdout: ""
Feb 15 01:28:19.215: INFO: update-demo-nautilus-2fbv4 is created but not running
Feb 15 01:28:24.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-510'
Feb 15 01:28:24.371: INFO: stderr: ""
Feb 15 01:28:24.371: INFO: stdout: "update-demo-nautilus-2fbv4 update-demo-nautilus-qj8r4 "
Feb 15 01:28:24.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fbv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-510'
Feb 15 01:28:24.529: INFO: stderr: ""
Feb 15 01:28:24.529: INFO: stdout: "true"
Feb 15 01:28:24.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fbv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-510'
Feb 15 01:28:24.710: INFO: stderr: ""
Feb 15 01:28:24.710: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 15 01:28:24.710: INFO: validating pod update-demo-nautilus-2fbv4
Feb 15 01:28:24.734: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 15 01:28:24.734: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 15 01:28:24.734: INFO: update-demo-nautilus-2fbv4 is verified up and running
Feb 15 01:28:24.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qj8r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-510'
Feb 15 01:28:24.845: INFO: stderr: ""
Feb 15 01:28:24.845: INFO: stdout: "true"
Feb 15 01:28:24.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qj8r4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-510'
Feb 15 01:28:24.986: INFO: stderr: ""
Feb 15 01:28:24.986: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 15 01:28:24.986: INFO: validating pod update-demo-nautilus-qj8r4
Feb 15 01:28:25.019: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 15 01:28:25.020: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 15 01:28:25.020: INFO: update-demo-nautilus-qj8r4 is verified up and running
STEP: using delete to clean up resources
Feb 15 01:28:25.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-510'
Feb 15 01:28:25.109: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 01:28:25.109: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 15 01:28:25.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-510'
Feb 15 01:28:25.223: INFO: stderr: "No resources found in kubectl-510 namespace.\n"
Feb 15 01:28:25.223: INFO: stdout: ""
Feb 15 01:28:25.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-510 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 15 01:28:25.305: INFO: stderr: ""
Feb 15 01:28:25.305: INFO: stdout: "update-demo-nautilus-2fbv4\nupdate-demo-nautilus-qj8r4\n"
Feb 15 01:28:25.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-510'
Feb 15 01:28:25.979: INFO: stderr: "No resources found in kubectl-510 namespace.\n"
Feb 15 01:28:25.979: INFO: stdout: ""
Feb 15 01:28:25.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-510 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 15 01:28:26.112: INFO: stderr: ""
Feb 15 01:28:26.113: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:28:26.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-510" for this suite.

• [SLOW TEST:15.720 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":280,"completed":256,"skipped":4185,"failed":0}
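The go-template repeated above (`{{if (exists . "status" "containerStatuses")}}...`) prints `true` only when the `update-demo` container reports a `running` state, and prints nothing while the pod is still pending, which is why the loop sees `stdout: ""` before `stdout: "true"`. The same predicate in plain Python over a pod dict (a stand-in for `kubectl get pod -o json` output):

```python
def is_container_running(pod, container_name):
    """True iff the named container has a populated state.running entry."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container_name and "running" in cs.get("state", {}):
            return True
    return False

pending_pod = {"status": {}}  # no containerStatuses yet -> template printed ""
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-02-15T01:28:20Z"}}},
]}}

print(is_container_running(pending_pod, "update-demo"))  # False
print(is_container_running(running_pod, "update-demo"))  # True
```

The `exists` guard in the template matters: a freshly scheduled pod has no `containerStatuses` at all, so an unguarded range would error rather than print an empty string.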
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:28:26.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-d6acee65-2ed3-4d18-9e46-8722ab0d3f8e
STEP: Creating a pod to test consume configMaps
Feb 15 01:28:27.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35" in namespace "configmap-4645" to be "success or failure"
Feb 15 01:28:27.199: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35": Phase="Pending", Reason="", readiness=false. Elapsed: 33.427629ms
Feb 15 01:28:29.324: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158692599s
Feb 15 01:28:31.348: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182837659s
Feb 15 01:28:33.357: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191765105s
Feb 15 01:28:35.367: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201747457s
Feb 15 01:28:37.377: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211842595s
STEP: Saw pod success
Feb 15 01:28:37.378: INFO: Pod "pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35" satisfied condition "success or failure"
Feb 15 01:28:37.383: INFO: Trying to get logs from node jerma-node pod pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35 container configmap-volume-test: 
STEP: delete the pod
Feb 15 01:28:37.651: INFO: Waiting for pod pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35 to disappear
Feb 15 01:28:37.656: INFO: Pod pod-configmaps-6900890c-43d3-4aca-a386-0524ac75aa35 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:28:37.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4645" for this suite.

• [SLOW TEST:11.542 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":257,"skipped":4208,"failed":0}
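The test above mounts a ConfigMap as a volume and checks that `defaultMode` is honored on the projected files. A minimal manifest reproducing that scenario might look like the sketch below (all resource names are illustrative, not taken from the test; apply against a cluster with `kubectl apply -f`):

```shell
# Sketch: a ConfigMap mounted as a volume with defaultMode set, so the
# projected files appear with 0400 permissions. Names are illustrative.
cat > configmap-defaultmode.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  key: value
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/config && cat /etc/config/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: example-config
      defaultMode: 0400   # octal in YAML; the JSON API carries decimal 256
EOF
echo "manifest written"
```

Note that `defaultMode` applies to every key in the volume; a per-file `mode` on an `items` entry overrides it for that file.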
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:28:37.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-e058680a-032e-45cf-bfec-c64d57bda562 in namespace container-probe-529
Feb 15 01:28:45.870: INFO: Started pod busybox-e058680a-032e-45cf-bfec-c64d57bda562 in namespace container-probe-529
STEP: checking the pod's current state and verifying that restartCount is present
Feb 15 01:28:45.875: INFO: Initial restart count of pod busybox-e058680a-032e-45cf-bfec-c64d57bda562 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:32:46.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-529" for this suite.

• [SLOW TEST:249.088 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":258,"skipped":4210,"failed":0}
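The probe test above runs for roughly four minutes and verifies that `restartCount` stays at 0, because the probed file always exists. A sketch of that probe configuration (illustrative names, not the test's actual pod spec):

```shell
# Sketch: the container creates /tmp/health and keeps running, so the
# exec liveness probe ("cat /tmp/health") always succeeds and the kubelet
# never restarts the container. Names are illustrative.
cat > liveness-exec.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
echo "manifest written"
```

The inverse test (deleting `/tmp/health` after a delay) is how the companion "should be restarted" conformance case forces a probe failure.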
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:32:46.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 15 01:32:47.461: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 15 01:32:49.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:32:51.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:32:53.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 01:32:55.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717327167, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 15 01:32:58.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:32:58.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:32:59.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2362" for this suite.
STEP: Destroying namespace "webhook-2362-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.297 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":259,"skipped":4211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:00.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 15 01:33:00.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 15 01:33:02.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8484 create -f -'
Feb 15 01:33:10.273: INFO: stderr: ""
Feb 15 01:33:10.273: INFO: stdout: "e2e-test-crd-publish-openapi-1263-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 15 01:33:10.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8484 delete e2e-test-crd-publish-openapi-1263-crds test-cr'
Feb 15 01:33:10.456: INFO: stderr: ""
Feb 15 01:33:10.456: INFO: stdout: "e2e-test-crd-publish-openapi-1263-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 15 01:33:10.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8484 apply -f -'
Feb 15 01:33:10.828: INFO: stderr: ""
Feb 15 01:33:10.828: INFO: stdout: "e2e-test-crd-publish-openapi-1263-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 15 01:33:10.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8484 delete e2e-test-crd-publish-openapi-1263-crds test-cr'
Feb 15 01:33:11.007: INFO: stderr: ""
Feb 15 01:33:11.007: INFO: stdout: "e2e-test-crd-publish-openapi-1263-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 15 01:33:11.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1263-crds'
Feb 15 01:33:11.455: INFO: stderr: ""
Feb 15 01:33:11.455: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1263-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:33:14.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8484" for this suite.

• [SLOW TEST:14.431 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":260,"skipped":4234,"failed":0}
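"Preserving unknown fields at the schema root" means the CRD's OpenAPI schema opts out of structural pruning at the top level, which is why the `kubectl create`/`apply` calls above succeed with arbitrary properties. A hedged sketch of such a CRD (group and names are illustrative):

```shell
# Sketch: an apiextensions.k8s.io/v1 CRD whose root schema sets
# x-kubernetes-preserve-unknown-fields, so custom resources may carry
# properties the schema does not declare. Names are illustrative.
cat > crd-preserve-unknown.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
echo "manifest written"
```

With pruning disabled at the root, `kubectl explain` has little to say beyond kind and version, matching the nearly empty DESCRIPTION in the log output above.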
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:14.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:33:14.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5568" for this suite.
STEP: Destroying namespace "nspatchtest-6e68b598-246f-4983-b28a-d5f82c699d1d-7048" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":261,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:14.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-3dd7f531-9e18-4bdc-b9e0-8056623e86c6
STEP: Creating a pod to test consume configMaps
Feb 15 01:33:14.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68" in namespace "projected-6166" to be "success or failure"
Feb 15 01:33:14.938: INFO: Pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68": Phase="Pending", Reason="", readiness=false. Elapsed: 14.08765ms
Feb 15 01:33:16.949: INFO: Pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025549711s
Feb 15 01:33:18.992: INFO: Pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068373109s
Feb 15 01:33:21.071: INFO: Pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1469662s
Feb 15 01:33:23.078: INFO: Pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154314721s
STEP: Saw pod success
Feb 15 01:33:23.078: INFO: Pod "pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68" satisfied condition "success or failure"
Feb 15 01:33:23.085: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 01:33:23.146: INFO: Waiting for pod pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68 to disappear
Feb 15 01:33:23.157: INFO: Pod pod-projected-configmaps-3f4f69fd-26c1-42b3-9aa3-dbd0e8619a68 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:33:23.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6166" for this suite.

• [SLOW TEST:8.424 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":262,"skipped":4254,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:23.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:33:23.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126" in namespace "downward-api-9794" to be "success or failure"
Feb 15 01:33:23.408: INFO: Pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126": Phase="Pending", Reason="", readiness=false. Elapsed: 67.530696ms
Feb 15 01:33:25.416: INFO: Pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075413488s
Feb 15 01:33:27.431: INFO: Pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09003323s
Feb 15 01:33:29.436: INFO: Pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094868766s
Feb 15 01:33:31.445: INFO: Pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104022664s
STEP: Saw pod success
Feb 15 01:33:31.445: INFO: Pod "downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126" satisfied condition "success or failure"
Feb 15 01:33:31.451: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126 container client-container: 
STEP: delete the pod
Feb 15 01:33:31.483: INFO: Waiting for pod downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126 to disappear
Feb 15 01:33:31.491: INFO: Pod downwardapi-volume-04d6dc40-92a7-4200-8a15-0c288dbbf126 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:33:31.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9794" for this suite.

• [SLOW TEST:8.323 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4258,"failed":0}
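The downward API volume test above projects the container's own CPU limit into a file inside the pod. A minimal sketch of that wiring (names and the 500m limit are illustrative):

```shell
# Sketch: a downward API volume exposing the container's CPU limit via
# resourceFieldRef, readable by the container at /etc/podinfo/cpu_limit.
# Names and the limit value are illustrative.
cat > downward-cpu.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
echo "manifest written"
```

An optional `divisor` on the `resourceFieldRef` controls the units of the projected value (e.g. `1m` to read the limit in millicores).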
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:31.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 01:33:39.930: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:33:40.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7303" for this suite.

• [SLOW TEST:8.548 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4258,"failed":0}
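The `Expected: &{} to match Container's Termination Message:  --` line above is the assertion that the message is empty: with `terminationMessagePolicy: FallbackToLogsOnError`, container logs are used as the termination message only when the container exits with an error and writes nothing to the message file, so a successful pod ends with an empty message. A sketch of that container spec (names illustrative):

```shell
# Sketch: a container that succeeds (exit 0) under
# FallbackToLogsOnError, so no termination message is recorded even
# though the container wrote to stdout. Names are illustrative.
cat > termination-message.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo OK && exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
echo "manifest written"
```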
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:40.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 15 01:33:48.783: INFO: Successfully updated pod "labelsupdate1625058c-6007-47f1-88f7-35632528802b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:33:50.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4931" for this suite.

• [SLOW TEST:10.803 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":265,"skipped":4293,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:33:50.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-7a27b7cb-d996-494b-8856-c0243b41cba7
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:01.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5028" for this suite.

• [SLOW TEST:10.210 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":266,"skipped":4316,"failed":0}
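The binary-data test above waits for both text and binary payloads to appear in the mounted volume. A ConfigMap can carry both at once, with `binaryData` values base64-encoded; a sketch (names and contents illustrative):

```shell
# Sketch: a ConfigMap with a plain-text key under data and a
# base64-encoded key under binaryData; both are projected as files when
# mounted as a volume. Names and contents are illustrative.
cat > configmap-binary.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-example
data:
  text: "sample text data"
binaryData:
  blob: aGVsbG8gd29ybGQ=   # base64 for "hello world"
EOF
echo "manifest written"
```

Keys must be unique across `data` and `binaryData` combined; the API server rejects a ConfigMap that declares the same key in both maps.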
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:01.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 15 01:34:01.240: INFO: >>> kubeConfig: /root/.kube/config
Feb 15 01:34:03.233: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:15.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6820" for this suite.

• [SLOW TEST:14.233 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":267,"skipped":4318,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:15.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:34:15.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7" in namespace "projected-6996" to be "success or failure"
Feb 15 01:34:15.477: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.881829ms
Feb 15 01:34:17.493: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028269859s
Feb 15 01:34:19.538: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073163507s
Feb 15 01:34:21.544: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079702112s
Feb 15 01:34:23.551: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086735561s
Feb 15 01:34:25.559: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094468205s
STEP: Saw pod success
Feb 15 01:34:25.559: INFO: Pod "downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7" satisfied condition "success or failure"
Feb 15 01:34:25.564: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7 container client-container: 
STEP: delete the pod
Feb 15 01:34:25.609: INFO: Waiting for pod downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7 to disappear
Feb 15 01:34:25.682: INFO: Pod downwardapi-volume-62b31d02-1a8d-4c51-aaf0-9040342ffda7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:25.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6996" for this suite.

• [SLOW TEST:10.434 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":268,"skipped":4330,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:25.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 15 01:34:25.876: INFO: Waiting up to 5m0s for pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf" in namespace "emptydir-7928" to be "success or failure"
Feb 15 01:34:25.887: INFO: Pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.622942ms
Feb 15 01:34:27.894: INFO: Pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018464924s
Feb 15 01:34:29.934: INFO: Pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05857678s
Feb 15 01:34:31.954: INFO: Pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078517231s
Feb 15 01:34:33.966: INFO: Pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090029908s
STEP: Saw pod success
Feb 15 01:34:33.966: INFO: Pod "pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf" satisfied condition "success or failure"
Feb 15 01:34:33.972: INFO: Trying to get logs from node jerma-node pod pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf container test-container: 
STEP: delete the pod
Feb 15 01:34:34.087: INFO: Waiting for pod pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf to disappear
Feb 15 01:34:34.143: INFO: Pod pod-cbbf85d8-5fe4-4ac7-bab2-c9256c687fbf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:34.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7928" for this suite.

• [SLOW TEST:8.422 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4335,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:34.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 15 01:34:34.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620" in namespace "projected-5617" to be "success or failure"
Feb 15 01:34:34.416: INFO: Pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609173ms
Feb 15 01:34:36.572: INFO: Pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164184775s
Feb 15 01:34:38.586: INFO: Pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178481416s
Feb 15 01:34:40.599: INFO: Pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19088571s
Feb 15 01:34:42.620: INFO: Pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.212364309s
STEP: Saw pod success
Feb 15 01:34:42.621: INFO: Pod "downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620" satisfied condition "success or failure"
Feb 15 01:34:42.628: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620 container client-container: 
STEP: delete the pod
Feb 15 01:34:42.745: INFO: Waiting for pod downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620 to disappear
Feb 15 01:34:42.753: INFO: Pod downwardapi-volume-9c65f4c6-16a1-4e1c-9118-b6d6b1702620 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:42.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5617" for this suite.

• [SLOW TEST:8.605 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":270,"skipped":4339,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:42.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 15 01:34:42.913: INFO: Waiting up to 5m0s for pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd" in namespace "emptydir-4177" to be "success or failure"
Feb 15 01:34:42.931: INFO: Pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.996658ms
Feb 15 01:34:44.995: INFO: Pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082509909s
Feb 15 01:34:47.001: INFO: Pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088505544s
Feb 15 01:34:49.011: INFO: Pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098633342s
Feb 15 01:34:51.019: INFO: Pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10647762s
STEP: Saw pod success
Feb 15 01:34:51.019: INFO: Pod "pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd" satisfied condition "success or failure"
Feb 15 01:34:51.023: INFO: Trying to get logs from node jerma-node pod pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd container test-container: 
STEP: delete the pod
Feb 15 01:34:51.517: INFO: Waiting for pod pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd to disappear
Feb 15 01:34:51.525: INFO: Pod pod-fa720b42-8dcc-4d2b-a7d7-58a28739d7bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:51.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4177" for this suite.

• [SLOW TEST:8.765 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":271,"skipped":4339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:51.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:34:58.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2877" for this suite.

• [SLOW TEST:7.265 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":272,"skipped":4381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:34:58.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:35:06.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3801" for this suite.

• [SLOW TEST:8.188 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4420,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:35:06.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:35:23.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8032" for this suite.

• [SLOW TEST:16.639 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":274,"skipped":4438,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:35:23.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 15 01:35:23.935: INFO: Waiting up to 5m0s for pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9" in namespace "emptydir-581" to be "success or failure"
Feb 15 01:35:23.956: INFO: Pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.898788ms
Feb 15 01:35:25.965: INFO: Pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029035024s
Feb 15 01:35:27.973: INFO: Pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037397577s
Feb 15 01:35:29.986: INFO: Pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050589139s
Feb 15 01:35:32.242: INFO: Pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.305872043s
STEP: Saw pod success
Feb 15 01:35:32.242: INFO: Pod "pod-59cc2078-5d14-45a9-9783-fa15240a7ec9" satisfied condition "success or failure"
Feb 15 01:35:32.246: INFO: Trying to get logs from node jerma-node pod pod-59cc2078-5d14-45a9-9783-fa15240a7ec9 container test-container: 
STEP: delete the pod
Feb 15 01:35:33.137: INFO: Waiting for pod pod-59cc2078-5d14-45a9-9783-fa15240a7ec9 to disappear
Feb 15 01:35:33.155: INFO: Pod pod-59cc2078-5d14-45a9-9783-fa15240a7ec9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:35:33.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-581" for this suite.

• [SLOW TEST:9.681 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:35:33.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:35:33.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8673" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":276,"skipped":4488,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:35:33.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-5527
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5527
STEP: Deleting pre-stop pod
Feb 15 01:35:54.965: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:35:54.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5527" for this suite.

• [SLOW TEST:21.506 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":280,"completed":277,"skipped":4494,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:35:55.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 15 01:35:55.133: INFO: Waiting up to 5m0s for pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e" in namespace "downward-api-5161" to be "success or failure"
Feb 15 01:35:55.139: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.804399ms
Feb 15 01:35:57.145: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011877067s
Feb 15 01:35:59.154: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020493117s
Feb 15 01:36:01.158: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025263359s
Feb 15 01:36:03.163: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029513699s
Feb 15 01:36:05.174: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041186117s
STEP: Saw pod success
Feb 15 01:36:05.175: INFO: Pod "downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e" satisfied condition "success or failure"
Feb 15 01:36:05.180: INFO: Trying to get logs from node jerma-node pod downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e container dapi-container: 
STEP: delete the pod
Feb 15 01:36:05.252: INFO: Waiting for pod downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e to disappear
Feb 15 01:36:05.318: INFO: Pod downward-api-586a1979-f1cf-4a2c-aa51-00e462ecef3e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:36:05.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5161" for this suite.

• [SLOW TEST:10.320 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":278,"skipped":4535,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:36:05.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-17d8000f-9972-4375-a118-5a5b3281a903
STEP: Creating configMap with name cm-test-opt-upd-0f11dd08-fc7a-497c-ab00-3b454a5e7fda
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-17d8000f-9972-4375-a118-5a5b3281a903
STEP: Updating configmap cm-test-opt-upd-0f11dd08-fc7a-497c-ab00-3b454a5e7fda
STEP: Creating configMap with name cm-test-opt-create-5a00d13c-7a31-4c09-a748-c6d8d8fcbca3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:37:37.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7696" for this suite.

• [SLOW TEST:92.556 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4545,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 15 01:37:37.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 01:37:47.376: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 15 01:37:47.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6660" for this suite.

• [SLOW TEST:9.593 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":280,"skipped":4552,"failed":0}
SSSSSSSSSSSSS
Feb 15 01:37:47.481: INFO: Running AfterSuite actions on all nodes
Feb 15 01:37:47.481: INFO: Running AfterSuite actions on node 1
Feb 15 01:37:47.481: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":280,"skipped":4565,"failed":0}

Ran 280 of 4845 Specs in 7110.639 seconds
SUCCESS! -- 280 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS